I0822 18:34:49.467408 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0822 18:34:49.498632 6 e2e.go:109] Starting e2e run "ee002a9f-d561-4108-9d92-5c0834ec0275" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598121288 - Will randomize all specs
Will run 278 of 4844 specs

Aug 22 18:34:49.558: INFO: >>> kubeConfig: /root/.kube/config
Aug 22 18:34:49.561: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 22 18:34:49.713: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 22 18:34:50.271: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 22 18:34:50.271: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 22 18:34:50.271: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 22 18:34:50.277: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 22 18:34:50.277: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 22 18:34:50.277: INFO: e2e test version: v1.17.11
Aug 22 18:34:50.278: INFO: kube-apiserver version: v1.17.5
Aug 22 18:34:50.278: INFO: >>> kubeConfig: /root/.kube/config
Aug 22 18:34:50.281: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
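For context, a header like the one above comes from the upstream e2e binary run against an existing cluster; a minimal sketch of such an invocation (the binary location and provider flag are assumptions, not recorded in this log):

    # Sketch: run only the [Conformance] specs against the cluster in the kubeconfig.
    export KUBECONFIG=/root/.kube/config
    ./e2e.test --ginkgo.focus='\[Conformance\]' --provider=skeleton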
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:34:50.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
Aug 22 18:34:52.428: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 18:34:52.429: INFO: Creating ReplicaSet my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd
Aug 22 18:34:52.845: INFO: Pod name my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd: Found 0 pods out of 1
Aug 22 18:34:57.863: INFO: Pod name my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd: Found 1 pods out of 1
Aug 22 18:34:57.863: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd" is running
Aug 22 18:34:59.953: INFO: Pod "my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd-bd62d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 18:34:54 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 18:34:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 18:34:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 18:34:53 +0000 UTC Reason: Message:}])
Aug 22 18:34:59.953: INFO: Trying to dial the pod
Aug 22 18:35:05.191: INFO: Controller my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd: Got expected result from replica 1 [my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd-bd62d]: "my-hostname-basic-700dbdd0-7e2b-4c07-b176-139c1c747ccd-bd62d", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:35:05.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4337" for this suite.
• [SLOW TEST:14.917 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":1,"skipped":46,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:35:05.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 22 18:35:08.969: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 22 18:35:11.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718108, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 22 18:35:13.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
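The spec above creates a one-replica ReplicaSet whose pod serves its own hostname, then dials the replica until the response matches the pod name. A hand-rolled equivalent would look roughly like this; the manifest, names, and image are illustrative assumptions, not taken from the log:

    kubectl create -f - <<'EOF'
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: my-hostname-basic                  # illustrative name
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: my-hostname-basic
      template:
        metadata:
          labels:
            name: my-hostname-basic
        spec:
          containers:
          - name: my-hostname-basic
            image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed serve-hostname image
            args: ["serve-hostname"]
            ports:
            - containerPort: 9376              # serve-hostname's default port
    EOF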
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:35:05.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 18:35:08.969: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 18:35:11.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718108, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 18:35:13.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718108, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 18:35:15.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718109, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718108, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 18:35:18.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:35:32.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1032" for this suite.
STEP: Destroying namespace "webhook-1032-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:28.751 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":2,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
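The repeated "Registering slow webhook" steps vary only timeoutSeconds and failurePolicy on the registration. The shape of such a registration, with illustrative names (the suite registers it through the API rather than kubectl, and the service name/path here are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: slow-webhook-demo                  # illustrative
    webhooks:
    - name: slow.webhook.example.com           # illustrative
      timeoutSeconds: 1                        # deliberately shorter than the webhook's 5s delay
      failurePolicy: Ignore                    # with Fail, a timed-out call rejects the request
      clientConfig:
        service:
          namespace: webhook-1032              # assumed to match the test's service
          name: e2e-test-webhook
          path: /always-allow-delay-5s         # illustrative path
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
      sideEffects: None
      admissionReviewVersions: ["v1"]
    EOF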
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:35:33.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0822 18:36:17.260662 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 22 18:36:17.260: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:36:17.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9396" for this suite.
• [SLOW TEST:43.922 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":3,"skipped":102,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
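"Delete options say so" refers to DeleteOptions.propagationPolicy=Orphan: the RC is removed, the garbage collector strips the ownerReferences, and the pods survive. By hand, with an illustrative RC name (kubectl of this vintage spells the flag --cascade=false; newer releases use --cascade=orphan):

    kubectl delete rc my-rc --cascade=false    # orphan the dependents instead of cascading
    kubectl get pods -l name=my-rc             # the pods are still running, now ownerless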
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:36:17.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 22 18:36:32.771: INFO: Pod name wrapped-volume-race-2a9bd978-91fe-4914-9de2-33ac7e850816: Found 0 pods out of 5
Aug 22 18:36:37.785: INFO: Pod name wrapped-volume-race-2a9bd978-91fe-4914-9de2-33ac7e850816: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2a9bd978-91fe-4914-9de2-33ac7e850816 in namespace emptydir-wrapper-682, will wait for the garbage collector to delete the pods
Aug 22 18:37:03.616: INFO: Deleting ReplicationController wrapped-volume-race-2a9bd978-91fe-4914-9de2-33ac7e850816 took: 714.64117ms
Aug 22 18:37:04.717: INFO: Terminating ReplicationController wrapped-volume-race-2a9bd978-91fe-4914-9de2-33ac7e850816 pods took: 1.100311042s
STEP: Creating RC which spawns configmap-volume pods
Aug 22 18:37:33.987: INFO: Pod name wrapped-volume-race-23ed2925-92b5-42b6-a3d3-6c1632ba4238: Found 0 pods out of 5
Aug 22 18:37:39.196: INFO: Pod name wrapped-volume-race-23ed2925-92b5-42b6-a3d3-6c1632ba4238: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-23ed2925-92b5-42b6-a3d3-6c1632ba4238 in namespace emptydir-wrapper-682, will wait for the garbage collector to delete the pods
Aug 22 18:38:12.780: INFO: Deleting ReplicationController wrapped-volume-race-23ed2925-92b5-42b6-a3d3-6c1632ba4238 took: 113.489228ms
Aug 22 18:38:13.181: INFO: Terminating ReplicationController wrapped-volume-race-23ed2925-92b5-42b6-a3d3-6c1632ba4238 pods took: 400.252694ms
STEP: Creating RC which spawns configmap-volume pods
Aug 22 18:38:28.309: INFO: Pod name wrapped-volume-race-585840a0-f77e-44ae-8609-cdd9da33e4aa: Found 0 pods out of 5
Aug 22 18:38:34.827: INFO: Pod name wrapped-volume-race-585840a0-f77e-44ae-8609-cdd9da33e4aa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-585840a0-f77e-44ae-8609-cdd9da33e4aa in namespace emptydir-wrapper-682, will wait for the garbage collector to delete the pods
Aug 22 18:39:06.694: INFO: Deleting ReplicationController wrapped-volume-race-585840a0-f77e-44ae-8609-cdd9da33e4aa took: 609.107106ms
Aug 22 18:39:08.794: INFO: Terminating ReplicationController wrapped-volume-race-585840a0-f77e-44ae-8609-cdd9da33e4aa pods took: 2.100269482s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:39:49.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-682" for this suite.
• [SLOW TEST:211.835 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":4,"skipped":118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
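Each racing pod mounts many configMap volumes side by side, which is what historically triggered the wrapped-volume race. A two-volume sketch of the pod template (the real RC wires up 50; names and image are illustrative, and the configmaps must already exist):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapped-volume-demo                # illustrative
    spec:
      containers:
      - name: test-container
        image: registry.k8s.io/pause:3.9       # assumed placeholder image
        volumeMounts:
        - name: racey-configmap-0
          mountPath: /etc/config-0
        - name: racey-configmap-1
          mountPath: /etc/config-1
      volumes:
      - name: racey-configmap-0
        configMap:
          name: racey-configmap-0
      - name: racey-configmap-1
        configMap:
          name: racey-configmap-1
    EOF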
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":5,"skipped":156,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:39:49.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-972534b7-9640-428f-9e5b-1fd2486a098d STEP: Creating a pod to test consume secrets Aug 22 18:39:50.104: INFO: Waiting up to 5m0s for pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389" in namespace "secrets-6412" to be "success or failure" Aug 22 18:39:50.113: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723331ms Aug 22 18:39:52.117: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013450553s Aug 22 18:39:54.656: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551918519s Aug 22 18:39:57.156: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 7.051734629s Aug 22 18:39:59.262: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.157853299s STEP: Saw pod success Aug 22 18:39:59.262: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389" satisfied condition "success or failure" Aug 22 18:39:59.274: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389 container secret-volume-test: STEP: delete the pod Aug 22 18:39:59.833: INFO: Waiting for pod pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389 to disappear Aug 22 18:39:59.904: INFO: Pod pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:39:59.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6412" for this suite. 
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:39:49.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-972534b7-9640-428f-9e5b-1fd2486a098d
STEP: Creating a pod to test consume secrets
Aug 22 18:39:50.104: INFO: Waiting up to 5m0s for pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389" in namespace "secrets-6412" to be "success or failure"
Aug 22 18:39:50.113: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723331ms
Aug 22 18:39:52.117: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013450553s
Aug 22 18:39:54.656: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551918519s
Aug 22 18:39:57.156: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Pending", Reason="", readiness=false. Elapsed: 7.051734629s
Aug 22 18:39:59.262: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.157853299s
STEP: Saw pod success
Aug 22 18:39:59.262: INFO: Pod "pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389" satisfied condition "success or failure"
Aug 22 18:39:59.274: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389 container secret-volume-test:
STEP: delete the pod
Aug 22 18:39:59.833: INFO: Waiting for pod pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389 to disappear
Aug 22 18:39:59.904: INFO: Pod pod-secrets-a901ca2b-4cfe-463e-9250-4e2918574389 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:39:59.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6412" for this suite.
• [SLOW TEST:10.001 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
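The consuming pod mounts the secret with an explicit defaultMode and checks the resulting file mode and content from inside the container; in sketch form (names, image, and mode are illustrative assumptions):

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo                   # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox                         # assumed; the suite uses its own test image
        command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          defaultMode: 0400                    # octal here; JSON clients pass the decimal 256
    EOF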
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:39:59.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 18:40:01.718: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 18:40:04.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 18:40:06.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733718401, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 18:40:10.814: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:40:11.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3999" for this suite.
STEP: Destroying namespace "webhook-3999-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.178 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":7,"skipped":184,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:40:15.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-84418b43-48f9-4841-8b1f-65acab9961e2
STEP: Creating a pod to test consume secrets
"pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68" in namespace "secrets-7717" to be "success or failure" Aug 22 18:40:18.048: INFO: Pod "pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68": Phase="Pending", Reason="", readiness=false. Elapsed: 707.834792ms Aug 22 18:40:20.302: INFO: Pod "pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.9618465s Aug 22 18:40:22.331: INFO: Pod "pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.991037834s Aug 22 18:40:25.037: INFO: Pod "pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68": Phase="Pending", Reason="", readiness=false. Elapsed: 7.697091074s Aug 22 18:40:27.619: INFO: Pod "pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68": Phase="Pending", Reason="", readiness=false. Elapsed: 10.279036261s Aug 22 18:40:30.151: INFO: Pod "pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.810671579s STEP: Saw pod success Aug 22 18:40:30.151: INFO: Pod "pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68" satisfied condition "success or failure" Aug 22 18:40:30.154: INFO: Trying to get logs from node jerma-worker pod pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68 container secret-volume-test: STEP: delete the pod Aug 22 18:40:31.510: INFO: Waiting for pod pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68 to disappear Aug 22 18:40:31.898: INFO: Pod pod-secrets-34b9839c-e9c6-4636-b25c-71a951121c68 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:40:31.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7717" for this suite. 
• [SLOW TEST:17.236 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":205,"failed":0}
SSS
------------------------------
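The non-root variant differs from the earlier defaultMode sketch only in the pod-level securityContext, which is what makes the mounted files readable to the unprivileged user; with illustrative values:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-nonroot-demo           # illustrative
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                        # non-root UID (illustrative)
        fsGroup: 1000                          # group ownership applied to the secret volume
      containers:
      - name: secret-volume-test
        image: busybox                         # assumed
        command: ["sh", "-c", "id && ls -ln /etc/secret-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret              # from the earlier sketch
          defaultMode: 0440
    EOF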
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:40:32.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 22 18:40:43.141: INFO: 10 pods remaining
Aug 22 18:40:43.141: INFO: 10 pods has nil DeletionTimestamp
Aug 22 18:40:43.141: INFO:
Aug 22 18:40:45.227: INFO: 0 pods remaining
Aug 22 18:40:45.227: INFO: 0 pods has nil DeletionTimestamp
Aug 22 18:40:45.227: INFO:
Aug 22 18:40:46.937: INFO: 0 pods remaining
Aug 22 18:40:46.937: INFO: 0 pods has nil DeletionTimestamp
Aug 22 18:40:46.937: INFO:
Aug 22 18:40:48.780: INFO: 0 pods remaining
Aug 22 18:40:48.780: INFO: 0 pods has nil DeletionTimestamp
Aug 22 18:40:48.780: INFO:
Aug 22 18:40:50.530: INFO: 0 pods remaining
Aug 22 18:40:50.530: INFO: 0 pods has nil DeletionTimestamp
Aug 22 18:40:50.530: INFO:
STEP: Gathering metrics
W0822 18:40:53.057152 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 22 18:40:53.057: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:40:53.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8734" for this suite.
• [SLOW TEST:22.001 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":9,"skipped":208,"failed":0}
SSS
------------------------------
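"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the RC receives a foregroundDeletion finalizer and only disappears once the garbage collector has removed its pods, which is the "pods remaining" countdown above. Against the raw API it looks like this (RC name and namespace illustrative):

    kubectl proxy &                            # reuse kubectl's credentials for a raw API call
    curl -X DELETE -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc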
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:40:54.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 22 18:40:56.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-625'
Aug 22 18:41:20.433: INFO: stderr: ""
Aug 22 18:41:20.433: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 22 18:41:20.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-625'
Aug 22 18:41:20.839: INFO: stderr: ""
Aug 22 18:41:20.839: INFO: stdout: "update-demo-nautilus-kvhwf update-demo-nautilus-rzl7j "
Aug 22 18:41:20.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:21.646: INFO: stderr: ""
Aug 22 18:41:21.646: INFO: stdout: ""
Aug 22 18:41:21.646: INFO: update-demo-nautilus-kvhwf is created but not running
Aug 22 18:41:26.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-625'
Aug 22 18:41:26.841: INFO: stderr: ""
Aug 22 18:41:26.841: INFO: stdout: "update-demo-nautilus-kvhwf update-demo-nautilus-rzl7j "
Aug 22 18:41:26.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:27.097: INFO: stderr: ""
Aug 22 18:41:27.097: INFO: stdout: ""
Aug 22 18:41:27.097: INFO: update-demo-nautilus-kvhwf is created but not running
Aug 22 18:41:32.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-625'
Aug 22 18:41:32.256: INFO: stderr: ""
Aug 22 18:41:32.256: INFO: stdout: "update-demo-nautilus-kvhwf update-demo-nautilus-rzl7j "
Aug 22 18:41:32.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:32.554: INFO: stderr: ""
Aug 22 18:41:32.554: INFO: stdout: "true"
Aug 22 18:41:32.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:32.637: INFO: stderr: ""
Aug 22 18:41:32.637: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 22 18:41:32.637: INFO: validating pod update-demo-nautilus-kvhwf
Aug 22 18:41:32.640: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 22 18:41:32.640: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 22 18:41:32.640: INFO: update-demo-nautilus-kvhwf is verified up and running
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625' Aug 22 18:41:32.739: INFO: stderr: "" Aug 22 18:41:32.739: INFO: stdout: "true" Aug 22 18:41:32.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzl7j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-625' Aug 22 18:41:32.829: INFO: stderr: "" Aug 22 18:41:32.829: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 22 18:41:32.829: INFO: validating pod update-demo-nautilus-rzl7j Aug 22 18:41:32.833: INFO: got data: { "image": "nautilus.jpg" } Aug 22 18:41:32.833: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 22 18:41:32.833: INFO: update-demo-nautilus-rzl7j is verified up and running STEP: scaling down the replication controller Aug 22 18:41:32.945: INFO: scanned /root for discovery docs: Aug 22 18:41:32.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-625' Aug 22 18:41:34.727: INFO: stderr: "" Aug 22 18:41:34.727: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 22 18:41:34.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-625' Aug 22 18:41:35.058: INFO: stderr: "" Aug 22 18:41:35.058: INFO: stdout: "update-demo-nautilus-kvhwf update-demo-nautilus-rzl7j " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 22 18:41:40.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-625' Aug 22 18:41:40.695: INFO: stderr: "" Aug 22 18:41:40.695: INFO: stdout: "update-demo-nautilus-kvhwf " Aug 22 18:41:40.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625' Aug 22 18:41:41.151: INFO: stderr: "" Aug 22 18:41:41.151: INFO: stdout: "true" Aug 22 18:41:41.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-625' Aug 22 18:41:41.871: INFO: stderr: "" Aug 22 18:41:41.871: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 22 18:41:41.871: INFO: validating pod update-demo-nautilus-kvhwf Aug 22 18:41:42.486: INFO: got data: { "image": "nautilus.jpg" } Aug 22 18:41:42.486: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 22 18:41:42.486: INFO: update-demo-nautilus-kvhwf is verified up and running
STEP: scaling up the replication controller
Aug 22 18:41:42.488: INFO: scanned /root for discovery docs:
Aug 22 18:41:42.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-625'
Aug 22 18:41:44.258: INFO: stderr: ""
Aug 22 18:41:44.258: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 22 18:41:44.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-625'
Aug 22 18:41:44.374: INFO: stderr: ""
Aug 22 18:41:44.374: INFO: stdout: "update-demo-nautilus-kvhwf update-demo-nautilus-lxz2j "
Aug 22 18:41:44.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:44.468: INFO: stderr: ""
Aug 22 18:41:44.468: INFO: stdout: "true"
Aug 22 18:41:44.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:44.567: INFO: stderr: ""
Aug 22 18:41:44.567: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 22 18:41:44.567: INFO: validating pod update-demo-nautilus-kvhwf
Aug 22 18:41:44.570: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 22 18:41:44.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 22 18:41:44.570: INFO: update-demo-nautilus-kvhwf is verified up and running
Aug 22 18:41:44.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxz2j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:44.663: INFO: stderr: ""
Aug 22 18:41:44.663: INFO: stdout: ""
Aug 22 18:41:44.663: INFO: update-demo-nautilus-lxz2j is created but not running
Aug 22 18:41:49.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-625'
Aug 22 18:41:49.908: INFO: stderr: ""
Aug 22 18:41:49.908: INFO: stdout: "update-demo-nautilus-kvhwf update-demo-nautilus-lxz2j "
Aug 22 18:41:49.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625'
Aug 22 18:41:50.396: INFO: stderr: ""
Aug 22 18:41:50.396: INFO: stdout: "true"
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-625' Aug 22 18:41:51.232: INFO: stderr: "" Aug 22 18:41:51.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 22 18:41:51.232: INFO: validating pod update-demo-nautilus-kvhwf Aug 22 18:41:51.469: INFO: got data: { "image": "nautilus.jpg" } Aug 22 18:41:51.469: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 22 18:41:51.469: INFO: update-demo-nautilus-kvhwf is verified up and running Aug 22 18:41:51.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxz2j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-625' Aug 22 18:41:51.572: INFO: stderr: "" Aug 22 18:41:51.572: INFO: stdout: "true" Aug 22 18:41:51.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxz2j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-625' Aug 22 18:41:51.696: INFO: stderr: "" Aug 22 18:41:51.696: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 22 18:41:51.696: INFO: validating pod update-demo-nautilus-lxz2j Aug 22 18:41:52.157: INFO: got data: { "image": "nautilus.jpg" } Aug 22 18:41:52.157: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 22 18:41:52.157: INFO: update-demo-nautilus-lxz2j is verified up and running STEP: using delete to clean up resources Aug 22 18:41:52.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-625' Aug 22 18:41:52.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 22 18:41:52.440: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 22 18:41:52.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-625' Aug 22 18:41:53.284: INFO: stderr: "No resources found in kubectl-625 namespace.\n" Aug 22 18:41:53.284: INFO: stdout: "" Aug 22 18:41:53.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-625 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 22 18:41:53.380: INFO: stderr: "" Aug 22 18:41:53.380: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:41:53.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-625" for this suite. 
• [SLOW TEST:59.198 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
should scale a replication controller [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":10,"skipped":211,"failed":0}
SSSSSSSSSSS
------------------------------
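The verification loop above drives everything through kubectl's go-template output; interactively, the same pod-name listing and scale step read more simply (the jsonpath form is an equivalent, not what the suite runs):

    kubectl -n kubectl-625 get pods -l name=update-demo -o jsonpath='{.items[*].metadata.name}'
    kubectl -n kubectl-625 scale rc update-demo-nautilus --replicas=1 --timeout=5m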
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:41:53.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-31a18d64-67da-4b08-880a-376b1d939f3f
STEP: Creating a pod to test consume configMaps
Aug 22 18:41:54.895: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773" in namespace "projected-8809" to be "success or failure"
Aug 22 18:41:54.959: INFO: Pod "pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773": Phase="Pending", Reason="", readiness=false. Elapsed: 63.685528ms
Aug 22 18:41:56.964: INFO: Pod "pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06870864s
Aug 22 18:41:59.205: INFO: Pod "pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309656344s
Aug 22 18:42:01.443: INFO: Pod "pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.547429919s
STEP: Saw pod success
Aug 22 18:42:01.443: INFO: Pod "pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773" satisfied condition "success or failure"
Aug 22 18:42:01.609: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773 container projected-configmap-volume-test:
STEP: delete the pod
Aug 22 18:42:03.067: INFO: Waiting for pod pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773 to disappear
Aug 22 18:42:03.320: INFO: Pod pod-projected-configmaps-969f8d33-3986-4c5f-9dba-f20faa704773 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:42:03.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8809" for this suite.
• [SLOW TEST:10.106 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":222,"failed":0}
S
------------------------------
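A projected volume wraps one or more sources (here a single configMap) under one mount point, with defaultMode applied to the projection as a whole; a minimal equivalent of what this spec mounts (names and image are illustrative assumptions):

    kubectl create configmap projected-demo --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-demo                 # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox                         # assumed
        command: ["sh", "-c", "cat /etc/projected/data-1"]
        volumeMounts:
        - name: projected-volume
          mountPath: /etc/projected
      volumes:
      - name: projected-volume
        projected:
          defaultMode: 0644                    # the knob this [It] exercises
          sources:
          - configMap:
              name: projected-demo
    EOF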
Aug 22 18:42:07.539: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Aug 22 18:42:07.835: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 22 18:42:07.835: INFO: Container app ready: true, restart count 0 Aug 22 18:42:07.835: INFO: ss-0 from statefulset-596 started at 2020-08-22 18:42:03 +0000 UTC (1 container statuses recorded) Aug 22 18:42:07.835: INFO: Container webserver ready: false, restart count 0 Aug 22 18:42:07.835: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 22 18:42:07.835: INFO: Container kube-proxy ready: true, restart count 0 Aug 22 18:42:07.835: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 22 18:42:07.835: INFO: Container kindnet-cni ready: true, restart count 0 Aug 22 18:42:07.835: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Aug 22 18:42:07.980: INFO: pod-projected-secrets-681c826d-7ad2-455b-8533-3fc354fe7c22 from projected-5418 started at 2020-08-22 18:40:23 +0000 UTC (3 container statuses recorded) Aug 22 18:42:07.980: INFO: Container creates-volume-test ready: true, restart count 0 Aug 22 18:42:07.980: INFO: Container dels-volume-test ready: true, restart count 0 Aug 22 18:42:07.980: INFO: Container upds-volume-test ready: true, restart count 0 Aug 22 18:42:07.980: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 22 18:42:07.980: INFO: Container kindnet-cni ready: true, restart count 0 Aug 22 18:42:07.980: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 22 18:42:07.980: INFO: Container app ready: true, restart count 0 Aug 22 18:42:07.980: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 22 18:42:07.980: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-621372f5-a311-4d97-bec2-3e8707c05079 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-621372f5-a311-4d97-bec2-3e8707c05079 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-621372f5-a311-4d97-bec2-3e8707c05079 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:42:49.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4762" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:45.467 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":12,"skipped":223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:42:49.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
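The hostPort predicate exercised by the scheduling test above keys on the full (hostIP, protocol, hostPort) triple, not the port number alone, which is why pod2 and pod3 schedule alongside pod1. A minimal sketch of the three port bindings using the k8s.io/api types (the pause image and pod names are illustrative, not the test's actual manifests):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// portBinding builds a pod spec exposing one hostPort binding; the
// scheduler treats the (hostIP, protocol, hostPort) triple as the
// conflict key, so changing any element avoids a conflict.
func portBinding(name, hostIP string, proto corev1.Protocol) corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  name,
			Image: "k8s.gcr.io/pause:3.1", // illustrative image
			Ports: []corev1.ContainerPort{{
				ContainerPort: 54321,
				HostPort:      54321,
				HostIP:        hostIP,
				Protocol:      proto,
			}},
		}},
	}
}

func main() {
	// All three can land on the same node, mirroring pod1/pod2/pod3 above.
	for _, spec := range []corev1.PodSpec{
		portBinding("pod1", "127.0.0.1", corev1.ProtocolTCP),
		portBinding("pod2", "127.0.0.2", corev1.ProtocolTCP), // same port, different hostIP
		portBinding("pod3", "127.0.0.2", corev1.ProtocolUDP), // same port and IP, different protocol
	} {
		p := spec.Containers[0].Ports[0]
		fmt.Printf("%s: %s %s:%d\n", spec.Containers[0].Name, p.Protocol, p.HostIP, p.HostPort)
	}
}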
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:42:59.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2899" for this suite. STEP: Destroying namespace "nsdeletetest-8211" for this suite. Aug 22 18:42:59.842: INFO: Namespace nsdeletetest-8211 was already deleted STEP: Destroying namespace "nsdeletetest-9672" for this suite. • [SLOW TEST:10.675 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":13,"skipped":252,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:42:59.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 22 18:43:00.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7221' Aug 22 18:43:01.329: INFO: stderr: "" Aug 22 18:43:01.329: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 22 18:43:06.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7221 -o json' Aug 22 18:43:06.480: INFO: stderr: "" Aug 22 18:43:06.480: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-22T18:43:01Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n 
\"namespace\": \"kubectl-7221\",\n \"resourceVersion\": \"2534747\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7221/pods/e2e-test-httpd-pod\",\n \"uid\": \"a3a61d51-53d2-47ef-812f-fccf2a817d17\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vvrwq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vvrwq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vvrwq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-22T18:43:01Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-22T18:43:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-22T18:43:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-22T18:43:01Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://edb4dad6518c7f7308365ae187cf3e8740e026241921531671a194f865580c7d\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-22T18:43:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.3\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.3\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-22T18:43:01Z\"\n }\n}\n" STEP: replace the image in the pod Aug 22 18:43:06.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7221' Aug 22 18:43:07.042: INFO: stderr: "" Aug 22 18:43:07.042: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801 Aug 22 18:43:07.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7221' Aug 22 18:43:19.349: INFO: stderr: "" Aug 22 18:43:19.349: 
INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:43:19.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7221" for this suite. • [SLOW TEST:20.042 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792 should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":14,"skipped":262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:43:19.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-3648 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3648 to expose endpoints map[] Aug 22 18:43:22.063: INFO: successfully validated that service endpoint-test2 in namespace services-3648 exposes endpoints map[] (514.33221ms elapsed) STEP: Creating pod pod1 in namespace services-3648 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3648 to expose endpoints map[pod1:[80]] Aug 22 18:43:29.392: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.222705504s elapsed, will retry) Aug 22 18:43:30.399: INFO: successfully validated that service endpoint-test2 in namespace services-3648 exposes endpoints map[pod1:[80]] (7.229715545s elapsed) STEP: Creating pod pod2 in namespace services-3648 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3648 to expose endpoints map[pod1:[80] pod2:[80]] Aug 22 18:43:39.451: INFO: successfully validated that service endpoint-test2 in namespace services-3648 exposes endpoints map[pod1:[80] pod2:[80]] (9.048182447s elapsed) STEP: Deleting pod pod1 in namespace services-3648 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3648 to expose endpoints 
map[pod2:[80]] Aug 22 18:43:41.827: INFO: successfully validated that service endpoint-test2 in namespace services-3648 exposes endpoints map[pod2:[80]] (2.311493899s elapsed) STEP: Deleting pod pod2 in namespace services-3648 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3648 to expose endpoints map[] Aug 22 18:43:43.403: INFO: successfully validated that service endpoint-test2 in namespace services-3648 exposes endpoints map[] (1.572841248s elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:43:45.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3648" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.393 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":15,"skipped":334,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:43:45.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 22 18:43:58.469: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:43:59.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4324" for this suite. 
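The FallbackToLogsOnError policy checked above only falls back to the tail of the container log when the termination-message file is empty and the container exited with an error; here the file contains "OK", so that is what the kubelet reports. A minimal sketch of such a container spec (image, command, and file contents are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The container writes "OK" to its termination-message file; with
	// FallbackToLogsOnError the kubelet would only fall back to the log
	// tail if this file were empty AND the exit code were non-zero.
	c := corev1.Container{
		Name:                     "main",
		Image:                    "busybox", // illustrative
		Command:                  []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers:    []corev1.Container{c},
	}
	fmt.Printf("policy=%s path=%s\n", spec.Containers[0].TerminationMessagePolicy, c.TerminationMessagePath)
}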
• [SLOW TEST:14.298 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":339,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:43:59.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Aug 22 18:43:59.859: INFO: >>> kubeConfig: /root/.kube/config Aug 22 18:44:02.958: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:44:16.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-777" for this suite. 
• [SLOW TEST:17.089 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":17,"skipped":351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:44:16.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 22 18:44:24.383: INFO: Successfully updated pod "pod-update-df364e1b-e3e9-4aef-8611-e9695c61d48f" STEP: verifying the updated pod is in kubernetes Aug 22 18:44:24.420: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:44:24.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4131" for this suite. 
• [SLOW TEST:7.755 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":378,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:44:24.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:44:34.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8502" for this suite. 
• [SLOW TEST:10.555 seconds] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":381,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:44:34.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-1839 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1839 STEP: Deleting pre-stop pod Aug 22 18:44:56.770: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:44:57.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1839" for this suite. 
• [SLOW TEST:22.607 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":20,"skipped":394,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:44:57.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 22 18:44:59.486: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:45:08.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6192" for this suite. 
• [SLOW TEST:11.477 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":21,"skipped":396,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:45:09.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3014.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3014.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.218.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.218.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.218.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.218.228_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3014.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3014.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.218.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.218.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.218.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.218.228_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 22 18:45:28.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.684: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.686: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.703: INFO: Unable to read jessie_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.705: INFO: Unable to read jessie_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.707: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.709: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:28.721: INFO: Lookups using dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff failed for: [wheezy_udp@dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_udp@dns-test-service.dns-3014.svc.cluster.local jessie_tcp@dns-test-service.dns-3014.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local] Aug 22 18:45:34.303: INFO: Unable to read wheezy_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:34.647: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods 
dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:34.686: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:34.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:35.764: INFO: Unable to read jessie_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:35.767: INFO: Unable to read jessie_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:35.770: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:35.772: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:35.805: INFO: Lookups using dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff failed for: [wheezy_udp@dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_udp@dns-test-service.dns-3014.svc.cluster.local jessie_tcp@dns-test-service.dns-3014.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local] Aug 22 18:45:38.724: INFO: Unable to read wheezy_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:38.726: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:38.728: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:38.730: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:39.165: INFO: Unable to read jessie_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the 
server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:39.169: INFO: Unable to read jessie_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:39.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:39.993: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:41.626: INFO: Lookups using dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff failed for: [wheezy_udp@dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_udp@dns-test-service.dns-3014.svc.cluster.local jessie_tcp@dns-test-service.dns-3014.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local] Aug 22 18:45:43.932: INFO: Unable to read wheezy_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:44.321: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:44.357: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:44.507: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:45.579: INFO: Unable to read jessie_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:45.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:45.586: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:45.589: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local from pod 
dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:46.321: INFO: Lookups using dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff failed for: [wheezy_udp@dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@dns-test-service.dns-3014.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_udp@dns-test-service.dns-3014.svc.cluster.local jessie_tcp@dns-test-service.dns-3014.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3014.svc.cluster.local] Aug 22 18:45:48.933: INFO: Unable to read wheezy_udp@dns-test-service.dns-3014.svc.cluster.local from pod dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff: the server could not find the requested resource (get pods dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff) Aug 22 18:45:57.340: INFO: Lookups using dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff failed for: [wheezy_udp@dns-test-service.dns-3014.svc.cluster.local] Aug 22 18:46:00.194: INFO: DNS probes using dns-3014/dns-test-ed5cfd7b-987c-4bfd-a046-ece92205e3ff succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:46:03.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3014" for this suite. • [SLOW TEST:54.394 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":22,"skipped":397,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:46:03.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 22 18:46:05.049: INFO: Waiting up to 5m0s for pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d" in namespace "downward-api-790" to be "success or failure" Aug 22 18:46:05.521: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d": 
Phase="Pending", Reason="", readiness=false. Elapsed: 472.614056ms Aug 22 18:46:07.717: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.667860785s Aug 22 18:46:10.285: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.236629395s Aug 22 18:46:12.520: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.470723541s Aug 22 18:46:15.323: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.27392987s Aug 22 18:46:17.411: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.362119526s Aug 22 18:46:19.711: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.661796854s STEP: Saw pod success Aug 22 18:46:19.711: INFO: Pod "downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d" satisfied condition "success or failure" Aug 22 18:46:20.255: INFO: Trying to get logs from node jerma-worker2 pod downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d container dapi-container: STEP: delete the pod Aug 22 18:46:21.220: INFO: Waiting for pod downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d to disappear Aug 22 18:46:21.847: INFO: Pod downward-api-8e3cf1db-574b-47a0-9eb2-4370f78ed51d no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:46:21.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-790" for this suite. 
• [SLOW TEST:18.392 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:46:21.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 22 18:46:24.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc" in namespace "downward-api-3757" to be "success or failure" Aug 22 18:46:24.737: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 292.034563ms Aug 22 18:46:26.745: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299960456s Aug 22 18:46:29.209: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.76419602s Aug 22 18:46:31.537: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.092877897s Aug 22 18:46:34.053: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.608643483s Aug 22 18:46:36.683: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.238817525s Aug 22 18:46:39.220: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Running", Reason="", readiness=true. Elapsed: 14.775058695s Aug 22 18:46:41.249: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.804702788s STEP: Saw pod success Aug 22 18:46:41.249: INFO: Pod "downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc" satisfied condition "success or failure" Aug 22 18:46:41.253: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc container client-container: STEP: delete the pod Aug 22 18:46:41.753: INFO: Waiting for pod downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc to disappear Aug 22 18:46:42.022: INFO: Pod downwardapi-volume-69ae64e0-9ee1-4dee-bf07-972c1ac0e6cc no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:46:42.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3757" for this suite. • [SLOW TEST:20.208 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:46:42.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 22 18:46:43.186: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:47:01.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7898" for this suite. 
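------------------------------
[Editor's note — illustrative sketch, not part of the test output] The InitContainer case that finishes here asserts that on a restartPolicy: Never pod, a failing init container is not retried, the app containers never start, and the pod ends up Failed. A minimal sketch under those assumptions (names are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never   # failed init containers are not retried
  initContainers:
  - name: init-fails
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app-never-starts
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo unreachable"]
EOF
# The pod should settle in phase Failed, with the app container never started:
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'
------------------------------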
• [SLOW TEST:19.789 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":25,"skipped":535,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:47:01.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 22 18:47:17.652: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8975 PodName:pod-sharedvolume-168faed0-1671-43d6-8f14-77f2a47a5bc0 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 22 18:47:17.652: INFO: >>> kubeConfig: /root/.kube/config I0822 18:47:17.842635 6 log.go:172] (0xc001b768f0) (0xc002003400) Create stream I0822 18:47:17.842675 6 log.go:172] (0xc001b768f0) (0xc002003400) Stream added, broadcasting: 1 I0822 18:47:17.844704 6 log.go:172] (0xc001b768f0) Reply frame received for 1 I0822 18:47:17.844823 6 log.go:172] (0xc001b768f0) (0xc002e6d9a0) Create stream I0822 18:47:17.844840 6 log.go:172] (0xc001b768f0) (0xc002e6d9a0) Stream added, broadcasting: 3 I0822 18:47:17.845796 6 log.go:172] (0xc001b768f0) Reply frame received for 3 I0822 18:47:17.845821 6 log.go:172] (0xc001b768f0) (0xc002e6da40) Create stream I0822 18:47:17.845830 6 log.go:172] (0xc001b768f0) (0xc002e6da40) Stream added, broadcasting: 5 I0822 18:47:17.847762 6 log.go:172] (0xc001b768f0) Reply frame received for 5 I0822 18:47:17.896661 6 log.go:172] (0xc001b768f0) Data frame received for 3 I0822 18:47:17.896701 6 log.go:172] (0xc002e6d9a0) (3) Data frame handling I0822 18:47:17.896712 6 log.go:172] (0xc002e6d9a0) (3) Data frame sent I0822 18:47:17.896817 6 log.go:172] (0xc001b768f0) Data frame received for 5 I0822 18:47:17.896836 6 log.go:172] (0xc001b768f0) Data frame received for 3 I0822 18:47:17.896851 6 log.go:172] (0xc002e6d9a0) (3) Data frame handling I0822 18:47:17.896865 6 log.go:172] (0xc002e6da40) (5) Data frame handling I0822 18:47:17.898211 6 log.go:172] (0xc001b768f0) Data frame received for 1 I0822 18:47:17.898228 6 log.go:172]
(0xc002003400) (1) Data frame handling I0822 18:47:17.898238 6 log.go:172] (0xc002003400) (1) Data frame sent I0822 18:47:17.898251 6 log.go:172] (0xc001b768f0) (0xc002003400) Stream removed, broadcasting: 1 I0822 18:47:17.898324 6 log.go:172] (0xc001b768f0) Go away received I0822 18:47:17.898567 6 log.go:172] (0xc001b768f0) (0xc002003400) Stream removed, broadcasting: 1 I0822 18:47:17.898586 6 log.go:172] (0xc001b768f0) (0xc002e6d9a0) Stream removed, broadcasting: 3 I0822 18:47:17.898594 6 log.go:172] (0xc001b768f0) (0xc002e6da40) Stream removed, broadcasting: 5 Aug 22 18:47:17.898: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:47:17.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8975" for this suite. • [SLOW TEST:16.810 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":26,"skipped":541,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:47:18.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629 [It] should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 22 18:47:19.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4622' Aug 22 18:47:20.245: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 22 18:47:20.245: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 Aug 22 18:47:25.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4622' Aug 22 18:47:27.600: INFO: stderr: "" Aug 22 18:47:27.600: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:47:27.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4622" for this suite. • [SLOW TEST:9.823 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625 should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":27,"skipped":552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:47:28.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Aug 22 18:47:32.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4875 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 22 18:47:47.657: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in 
a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0822 18:47:46.054620 754 log.go:172] (0xc000a48b00) (0xc0002c61e0) Create stream\nI0822 18:47:46.054677 754 log.go:172] (0xc000a48b00) (0xc0002c61e0) Stream added, broadcasting: 1\nI0822 18:47:46.061859 754 log.go:172] (0xc000a48b00) Reply frame received for 1\nI0822 18:47:46.061935 754 log.go:172] (0xc000a48b00) (0xc000756000) Create stream\nI0822 18:47:46.061952 754 log.go:172] (0xc000a48b00) (0xc000756000) Stream added, broadcasting: 3\nI0822 18:47:46.063086 754 log.go:172] (0xc000a48b00) Reply frame received for 3\nI0822 18:47:46.063114 754 log.go:172] (0xc000a48b00) (0xc0002c6280) Create stream\nI0822 18:47:46.063123 754 log.go:172] (0xc000a48b00) (0xc0002c6280) Stream added, broadcasting: 5\nI0822 18:47:46.064336 754 log.go:172] (0xc000a48b00) Reply frame received for 5\nI0822 18:47:46.064380 754 log.go:172] (0xc000a48b00) (0xc0007560a0) Create stream\nI0822 18:47:46.064390 754 log.go:172] (0xc000a48b00) (0xc0007560a0) Stream added, broadcasting: 7\nI0822 18:47:46.065965 754 log.go:172] (0xc000a48b00) Reply frame received for 7\nI0822 18:47:46.066083 754 log.go:172] (0xc000756000) (3) Writing data frame\nI0822 18:47:46.066537 754 log.go:172] (0xc000756000) (3) Writing data frame\nI0822 18:47:46.067277 754 log.go:172] (0xc000a48b00) Data frame received for 5\nI0822 18:47:46.067295 754 log.go:172] (0xc0002c6280) (5) Data frame handling\nI0822 18:47:46.067313 754 log.go:172] (0xc0002c6280) (5) Data frame sent\nI0822 18:47:46.068678 754 log.go:172] (0xc000a48b00) Data frame received for 5\nI0822 18:47:46.068702 754 log.go:172] (0xc0002c6280) (5) Data frame handling\nI0822 18:47:46.068842 754 log.go:172] (0xc0002c6280) (5) Data frame sent\nI0822 18:47:46.089865 754 log.go:172] (0xc000a48b00) Data frame received for 5\nI0822 18:47:46.089979 754 log.go:172] (0xc0002c6280) (5) Data frame handling\nI0822 18:47:46.090010 754 log.go:172] (0xc000a48b00) Data frame received for 7\nI0822 18:47:46.090033 754 log.go:172] (0xc000a48b00) (0xc000756000) Stream removed, broadcasting: 3\nI0822 18:47:46.090054 754 log.go:172] (0xc000a48b00) Data frame received for 1\nI0822 18:47:46.090070 754 log.go:172] (0xc0002c61e0) (1) Data frame handling\nI0822 18:47:46.090087 754 log.go:172] (0xc0002c61e0) (1) Data frame sent\nI0822 18:47:46.090105 754 log.go:172] (0xc000a48b00) (0xc0002c61e0) Stream removed, broadcasting: 1\nI0822 18:47:46.090140 754 log.go:172] (0xc0007560a0) (7) Data frame handling\nI0822 18:47:46.090201 754 log.go:172] (0xc000a48b00) Go away received\nI0822 18:47:46.090398 754 log.go:172] (0xc000a48b00) (0xc0002c61e0) Stream removed, broadcasting: 1\nI0822 18:47:46.090418 754 log.go:172] (0xc000a48b00) (0xc000756000) Stream removed, broadcasting: 3\nI0822 18:47:46.090429 754 log.go:172] (0xc000a48b00) (0xc0002c6280) Stream removed, broadcasting: 5\nI0822 18:47:46.090442 754 log.go:172] (0xc000a48b00) (0xc0007560a0) Stream removed, broadcasting: 7\n" Aug 22 18:47:47.657: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:47:50.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4875" for this suite. 
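------------------------------
[Editor's note — illustrative sketch, not part of the test output] Both kubectl run generators exercised above (--generator=deployment/apps.v1 and --generator=job/v1) are already flagged DEPRECATED in this run and were removed from later kubectl releases. A rough present-day equivalent of the --rm job pattern, using a bare pod instead of a Job (the name is hypothetical):

echo 'abcd1234' | kubectl run rm-busybox-demo --image=docker.io/library/busybox:1.29 \
  --rm -i --restart=Never -- sh -c 'cat && echo stdin closed'

--rm requires an attached session (-i here) and deletes the object after it exits, which is the cleanup behavior the test verifies.
------------------------------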
• [SLOW TEST:22.810 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843 should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":28,"skipped":585,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:47:51.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6903 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 22 18:47:52.921: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 22 18:48:33.613: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.21:8080/dial?request=hostname&protocol=http&host=10.244.2.27&port=8080&tries=1'] Namespace:pod-network-test-6903 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 22 18:48:33.613: INFO: >>> kubeConfig: /root/.kube/config I0822 18:48:33.636025 6 log.go:172] (0xc001032210) (0xc002002960) Create stream I0822 18:48:33.636053 6 log.go:172] (0xc001032210) (0xc002002960) Stream added, broadcasting: 1 I0822 18:48:33.637928 6 log.go:172] (0xc001032210) Reply frame received for 1 I0822 18:48:33.637985 6 log.go:172] (0xc001032210) (0xc002f10aa0) Create stream I0822 18:48:33.638016 6 log.go:172] (0xc001032210) (0xc002f10aa0) Stream added, broadcasting: 3 I0822 18:48:33.638880 6 log.go:172] (0xc001032210) Reply frame received for 3 I0822 18:48:33.638915 6 log.go:172] (0xc001032210) (0xc00239a000) Create stream I0822 18:48:33.638932 6 log.go:172] (0xc001032210) (0xc00239a000) Stream added, broadcasting: 5 I0822 18:48:33.639743 6 log.go:172] (0xc001032210) Reply frame received for 5 I0822 18:48:33.712409 6 log.go:172] (0xc001032210) Data frame received for 3 I0822 18:48:33.712437 6 log.go:172] (0xc002f10aa0) (3) Data frame handling I0822 18:48:33.712457 6 log.go:172] (0xc002f10aa0) (3) Data frame sent I0822 18:48:33.713125 6 log.go:172] (0xc001032210) Data frame received for 5 
I0822 18:48:33.713157 6 log.go:172] (0xc00239a000) (5) Data frame handling I0822 18:48:33.713184 6 log.go:172] (0xc001032210) Data frame received for 3 I0822 18:48:33.713200 6 log.go:172] (0xc002f10aa0) (3) Data frame handling I0822 18:48:33.714262 6 log.go:172] (0xc001032210) Data frame received for 1 I0822 18:48:33.714324 6 log.go:172] (0xc002002960) (1) Data frame handling I0822 18:48:33.714353 6 log.go:172] (0xc002002960) (1) Data frame sent I0822 18:48:33.714373 6 log.go:172] (0xc001032210) (0xc002002960) Stream removed, broadcasting: 1 I0822 18:48:33.714397 6 log.go:172] (0xc001032210) Go away received I0822 18:48:33.714494 6 log.go:172] (0xc001032210) (0xc002002960) Stream removed, broadcasting: 1 I0822 18:48:33.714513 6 log.go:172] (0xc001032210) (0xc002f10aa0) Stream removed, broadcasting: 3 I0822 18:48:33.714522 6 log.go:172] (0xc001032210) (0xc00239a000) Stream removed, broadcasting: 5 Aug 22 18:48:33.714: INFO: Waiting for responses: map[] Aug 22 18:48:33.750: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.21:8080/dial?request=hostname&protocol=http&host=10.244.1.19&port=8080&tries=1'] Namespace:pod-network-test-6903 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 22 18:48:33.750: INFO: >>> kubeConfig: /root/.kube/config I0822 18:48:33.835231 6 log.go:172] (0xc001b76630) (0xc002e6c960) Create stream I0822 18:48:33.835265 6 log.go:172] (0xc001b76630) (0xc002e6c960) Stream added, broadcasting: 1 I0822 18:48:33.836869 6 log.go:172] (0xc001b76630) Reply frame received for 1 I0822 18:48:33.836894 6 log.go:172] (0xc001b76630) (0xc002002a00) Create stream I0822 18:48:33.836908 6 log.go:172] (0xc001b76630) (0xc002002a00) Stream added, broadcasting: 3 I0822 18:48:33.837690 6 log.go:172] (0xc001b76630) Reply frame received for 3 I0822 18:48:33.837728 6 log.go:172] (0xc001b76630) (0xc002306000) Create stream I0822 18:48:33.837739 6 log.go:172] (0xc001b76630) (0xc002306000) Stream added, broadcasting: 5 I0822 18:48:33.838488 6 log.go:172] (0xc001b76630) Reply frame received for 5 I0822 18:48:33.893136 6 log.go:172] (0xc001b76630) Data frame received for 3 I0822 18:48:33.893167 6 log.go:172] (0xc002002a00) (3) Data frame handling I0822 18:48:33.893193 6 log.go:172] (0xc002002a00) (3) Data frame sent I0822 18:48:33.893336 6 log.go:172] (0xc001b76630) Data frame received for 5 I0822 18:48:33.893390 6 log.go:172] (0xc002306000) (5) Data frame handling I0822 18:48:33.893467 6 log.go:172] (0xc001b76630) Data frame received for 3 I0822 18:48:33.893486 6 log.go:172] (0xc002002a00) (3) Data frame handling I0822 18:48:33.894671 6 log.go:172] (0xc001b76630) Data frame received for 1 I0822 18:48:33.894689 6 log.go:172] (0xc002e6c960) (1) Data frame handling I0822 18:48:33.894703 6 log.go:172] (0xc002e6c960) (1) Data frame sent I0822 18:48:33.894720 6 log.go:172] (0xc001b76630) (0xc002e6c960) Stream removed, broadcasting: 1 I0822 18:48:33.894737 6 log.go:172] (0xc001b76630) Go away received I0822 18:48:33.894828 6 log.go:172] (0xc001b76630) (0xc002e6c960) Stream removed, broadcasting: 1 I0822 18:48:33.894864 6 log.go:172] (0xc001b76630) (0xc002002a00) Stream removed, broadcasting: 3 I0822 18:48:33.894875 6 log.go:172] (0xc001b76630) (0xc002306000) Stream removed, broadcasting: 5 Aug 22 18:48:33.894: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:48:33.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6903" for this suite. • [SLOW TEST:42.614 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":590,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:48:33.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 22 18:48:34.569: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:48:53.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-50" for this suite. 
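------------------------------
[Editor's note — illustrative sketch, not part of the test output] The case that finishes here relies on init containers running one at a time, in order, each to completion, before any regular container starts on a RestartAlways pod (the default restartPolicy). A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-order-demo
spec:
  # restartPolicy defaults to Always; init containers still run strictly
  # in order, each to completion, before 'app' starts.
  initContainers:
  - name: init-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo first"]
  - name: init-2
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo second"]
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF
------------------------------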
• [SLOW TEST:20.443 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":30,"skipped":600,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:48:54.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-8d28b5eb-9e67-48f6-85f9-2cf5e926c627 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:48:54.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2501" for this suite. 
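------------------------------
[Editor's note — illustrative sketch, not part of the test output] The ConfigMap case above needs no pod at all: it only checks that apiserver validation rejects a ConfigMap whose data map contains an empty key. A sketch of the failing request (hypothetical name; the exact error text may vary by version):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo
data:
  "": "value"   # invalid: data keys must be non-empty
EOF
# Expected: the request fails with a validation error and the object is
# never persisted.
------------------------------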
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":31,"skipped":601,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:48:54.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 22 18:48:55.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e" in namespace "projected-1398" to be "success or failure" Aug 22 18:48:55.441: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Pending", Reason="", readiness=false. Elapsed: 109.059312ms Aug 22 18:48:57.859: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.5266292s Aug 22 18:49:00.532: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.199913883s Aug 22 18:49:02.581: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.248957115s Aug 22 18:49:04.760: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.428236156s Aug 22 18:49:07.061: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.728822226s Aug 22 18:49:09.365: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Running", Reason="", readiness=true. Elapsed: 14.032911739s Aug 22 18:49:11.484: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.152529059s STEP: Saw pod success Aug 22 18:49:11.485: INFO: Pod "downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e" satisfied condition "success or failure" Aug 22 18:49:11.487: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e container client-container: STEP: delete the pod Aug 22 18:49:11.706: INFO: Waiting for pod downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e to disappear Aug 22 18:49:11.712: INFO: Pod downwardapi-volume-d2684ba1-2cc6-41ba-b65d-ed6f0295ec1e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:49:11.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1398" for this suite. • [SLOW TEST:16.838 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":601,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:49:11.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f7b76ded-c58a-4865-9b50-7f7ffce6547a STEP: Creating a pod to test consume configMaps Aug 22 18:49:11.923: INFO: Waiting up to 5m0s for pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005" in namespace "configmap-1504" to be "success or failure" Aug 22 18:49:12.031: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Pending", Reason="", readiness=false. Elapsed: 108.315466ms Aug 22 18:49:14.599: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676231655s Aug 22 18:49:16.602: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.679726841s Aug 22 18:49:18.642: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.719310989s Aug 22 18:49:20.702: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.779908967s Aug 22 18:49:23.497: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.57475159s Aug 22 18:49:26.321: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.398205921s Aug 22 18:49:29.202: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.279165309s STEP: Saw pod success Aug 22 18:49:29.202: INFO: Pod "pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005" satisfied condition "success or failure" Aug 22 18:49:29.485: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005 container configmap-volume-test: STEP: delete the pod Aug 22 18:49:32.459: INFO: Waiting for pod pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005 to disappear Aug 22 18:49:32.809: INFO: Pod pod-configmaps-c44e4a8a-92d5-44b7-999f-6c6fbbba6005 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:49:32.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1504" for this suite. • [SLOW TEST:22.933 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":619,"failed":0} [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:49:34.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526 [It] should create an rc from an image [Deprecated] [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 22 18:49:37.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1271' Aug 22 18:49:38.267: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 22 18:49:38.267: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Aug 22 18:49:39.624: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-l727j] Aug 22 18:49:39.624: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-l727j" in namespace "kubectl-1271" to be "running and ready" Aug 22 18:49:40.226: INFO: Pod "e2e-test-httpd-rc-l727j": Phase="Pending", Reason="", readiness=false. Elapsed: 602.510592ms Aug 22 18:49:42.671: INFO: Pod "e2e-test-httpd-rc-l727j": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047326658s Aug 22 18:49:44.856: INFO: Pod "e2e-test-httpd-rc-l727j": Phase="Pending", Reason="", readiness=false. Elapsed: 5.232569127s Aug 22 18:49:47.007: INFO: Pod "e2e-test-httpd-rc-l727j": Phase="Pending", Reason="", readiness=false. Elapsed: 7.383574653s Aug 22 18:49:49.342: INFO: Pod "e2e-test-httpd-rc-l727j": Phase="Pending", Reason="", readiness=false. Elapsed: 9.718093651s Aug 22 18:49:51.671: INFO: Pod "e2e-test-httpd-rc-l727j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.046812566s Aug 22 18:49:54.162: INFO: Pod "e2e-test-httpd-rc-l727j": Phase="Running", Reason="", readiness=true. Elapsed: 14.537799768s Aug 22 18:49:54.162: INFO: Pod "e2e-test-httpd-rc-l727j" satisfied condition "running and ready" Aug 22 18:49:54.162: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-l727j] Aug 22 18:49:54.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1271' Aug 22 18:49:56.100: INFO: stderr: "" Aug 22 18:49:56.100: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.25. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.25. 
Set the 'ServerName' directive globally to suppress this message\n[Sat Aug 22 18:49:51.541218 2020] [mpm_event:notice] [pid 1:tid 139650880527208] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Aug 22 18:49:51.541264 2020] [core:notice] [pid 1:tid 139650880527208] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531 Aug 22 18:49:56.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1271' Aug 22 18:49:58.055: INFO: stderr: "" Aug 22 18:49:58.055: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:49:58.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1271" for this suite. • [SLOW TEST:24.140 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":34,"skipped":619,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:49:58.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 22 18:50:02.141: INFO: Waiting up to 5m0s for pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97" in namespace "emptydir-5503" to be "success or failure" Aug 22 18:50:02.642: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97": Phase="Pending", Reason="", readiness=false. Elapsed: 500.940523ms Aug 22 18:50:05.501: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.360862234s Aug 22 18:50:07.845: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.704212809s Aug 22 18:50:10.079: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97": Phase="Pending", Reason="", readiness=false. Elapsed: 7.938559974s Aug 22 18:50:12.329: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188859927s Aug 22 18:50:14.616: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97": Phase="Running", Reason="", readiness=true. Elapsed: 12.475916191s Aug 22 18:50:17.866: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.724948112s STEP: Saw pod success Aug 22 18:50:17.866: INFO: Pod "pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97" satisfied condition "success or failure" Aug 22 18:50:18.872: INFO: Trying to get logs from node jerma-worker2 pod pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97 container test-container: STEP: delete the pod Aug 22 18:50:20.813: INFO: Waiting for pod pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97 to disappear Aug 22 18:50:21.110: INFO: Pod pod-35cd827e-b1d5-4b83-806a-b6eb5c171a97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:50:21.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5503" for this suite. • [SLOW TEST:22.793 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":638,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:50:21.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery 
document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:50:23.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9664" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":36,"skipped":660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:50:24.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0822 18:50:33.586200 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 22 18:50:33.586: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:50:33.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5758" for this suite. 
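------------------------------
[Editor's note — illustrative sketch, not part of the test output] The garbage collector case that finishes here exercises non-orphaning (background) cascading deletion: deleting a Deployment lets the GC remove the dependent ReplicaSet and pods via their ownerReferences. A sketch with a hypothetical name; --cascade takes a boolean in the kubectl v1.17 used here, while newer releases spell it --cascade=background:

kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl get rs -l app=gc-demo     # one ReplicaSet, owned by the Deployment
kubectl delete deployment gc-demo --cascade=true
kubectl get rs -l app=gc-demo     # eventually: No resources found
------------------------------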
• [SLOW TEST:10.130 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":37,"skipped":692,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:50:34.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 22 18:50:36.000: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 22 18:50:41.597: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:50:42.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9212" for this suite. 
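------------------------------
[Editor's note — illustrative sketch, not part of the test output] The ReplicationController case that finishes here creates a pods=2 quota, asks the rc for one replica more than that, and expects a ReplicaFailure condition (typically reason FailedCreate) that clears once the rc is scaled down to fit. A sketch with hypothetical names:

kubectl create quota condition-demo --hard=pods=2
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-demo
spec:
  replicas: 3            # one more than the quota allows
  selector:
    app: condition-demo
  template:
    metadata:
      labels:
        app: condition-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# A ReplicaFailure condition should surface on the rc status...
kubectl get rc condition-demo -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].reason}'
# ...and clear after scaling down within the quota:
kubectl scale rc condition-demo --replicas=2
------------------------------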
• [SLOW TEST:10.041 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":38,"skipped":696,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:50:44.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 22 18:51:14.937: INFO: Container started at 2020-08-22 18:50:57 +0000 UTC, pod became ready at 2020-08-22 18:51:14 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:51:14.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9226" for this suite. 
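------------------------------
[Editor's note — illustrative sketch, not part of the test output] The readiness-probe case that finishes here started its container at 18:50:57 but the pod only became ready at 18:51:14, because the probe is not attempted until initialDelaySeconds elapse; the container is never restarted. A minimal sketch with hypothetical names and an assumed 30s delay:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: probe
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["sh", "-c", "exit 0"]
      initialDelaySeconds: 30
EOF
# Running almost immediately, but ready stays false (and restartCount 0)
# until the first probe fires after the 30s delay:
kubectl get pod readiness-delay-demo -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}'
------------------------------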
• [SLOW TEST:31.050 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:51:15.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6561 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6561 I0822 18:51:21.361532 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6561, replica count: 2 I0822 18:51:24.412018 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0822 18:51:27.412230 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0822 18:51:30.412488 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0822 18:51:33.412720 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0822 18:51:36.413035 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0822 18:51:39.413347 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 22 18:51:39.413: INFO: Creating new exec pod Aug 22 18:51:51.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6561 execpodvhgbv -- /bin/sh -x -c 
nc -zv -t -w 2 externalname-service 80' Aug 22 18:52:22.565: INFO: stderr: "I0822 18:52:22.502556 835 log.go:172] (0xc0000f7760) (0xc000771f40) Create stream\nI0822 18:52:22.502587 835 log.go:172] (0xc0000f7760) (0xc000771f40) Stream added, broadcasting: 1\nI0822 18:52:22.504930 835 log.go:172] (0xc0000f7760) Reply frame received for 1\nI0822 18:52:22.505062 835 log.go:172] (0xc0000f7760) (0xc0006be6e0) Create stream\nI0822 18:52:22.505080 835 log.go:172] (0xc0000f7760) (0xc0006be6e0) Stream added, broadcasting: 3\nI0822 18:52:22.505985 835 log.go:172] (0xc0000f7760) Reply frame received for 3\nI0822 18:52:22.506015 835 log.go:172] (0xc0000f7760) (0xc0007174a0) Create stream\nI0822 18:52:22.506030 835 log.go:172] (0xc0000f7760) (0xc0007174a0) Stream added, broadcasting: 5\nI0822 18:52:22.506945 835 log.go:172] (0xc0000f7760) Reply frame received for 5\nI0822 18:52:22.555988 835 log.go:172] (0xc0000f7760) Data frame received for 5\nI0822 18:52:22.556019 835 log.go:172] (0xc0007174a0) (5) Data frame handling\nI0822 18:52:22.556038 835 log.go:172] (0xc0007174a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0822 18:52:22.556396 835 log.go:172] (0xc0000f7760) Data frame received for 5\nI0822 18:52:22.556411 835 log.go:172] (0xc0007174a0) (5) Data frame handling\nI0822 18:52:22.556430 835 log.go:172] (0xc0007174a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0822 18:52:22.556555 835 log.go:172] (0xc0000f7760) Data frame received for 3\nI0822 18:52:22.556572 835 log.go:172] (0xc0006be6e0) (3) Data frame handling\nI0822 18:52:22.556866 835 log.go:172] (0xc0000f7760) Data frame received for 5\nI0822 18:52:22.556880 835 log.go:172] (0xc0007174a0) (5) Data frame handling\nI0822 18:52:22.557964 835 log.go:172] (0xc0000f7760) Data frame received for 1\nI0822 18:52:22.557973 835 log.go:172] (0xc000771f40) (1) Data frame handling\nI0822 18:52:22.557988 835 log.go:172] (0xc000771f40) (1) Data frame sent\nI0822 18:52:22.558004 835 log.go:172] (0xc0000f7760) (0xc000771f40) Stream removed, broadcasting: 1\nI0822 18:52:22.558149 835 log.go:172] (0xc0000f7760) Go away received\nI0822 18:52:22.558312 835 log.go:172] (0xc0000f7760) (0xc000771f40) Stream removed, broadcasting: 1\nI0822 18:52:22.558323 835 log.go:172] (0xc0000f7760) (0xc0006be6e0) Stream removed, broadcasting: 3\nI0822 18:52:22.558327 835 log.go:172] (0xc0000f7760) (0xc0007174a0) Stream removed, broadcasting: 5\n" Aug 22 18:52:22.565: INFO: stdout: "" Aug 22 18:52:22.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6561 execpodvhgbv -- /bin/sh -x -c nc -zv -t -w 2 10.99.203.132 80' Aug 22 18:52:22.862: INFO: stderr: "I0822 18:52:22.800639 861 log.go:172] (0xc0003c0dc0) (0xc000a3a000) Create stream\nI0822 18:52:22.800698 861 log.go:172] (0xc0003c0dc0) (0xc000a3a000) Stream added, broadcasting: 1\nI0822 18:52:22.803033 861 log.go:172] (0xc0003c0dc0) Reply frame received for 1\nI0822 18:52:22.803092 861 log.go:172] (0xc0003c0dc0) (0xc000aa6000) Create stream\nI0822 18:52:22.803108 861 log.go:172] (0xc0003c0dc0) (0xc000aa6000) Stream added, broadcasting: 3\nI0822 18:52:22.803973 861 log.go:172] (0xc0003c0dc0) Reply frame received for 3\nI0822 18:52:22.804008 861 log.go:172] (0xc0003c0dc0) (0xc000aa60a0) Create stream\nI0822 18:52:22.804016 861 log.go:172] (0xc0003c0dc0) (0xc000aa60a0) Stream added, broadcasting: 5\nI0822 18:52:22.804929 861 log.go:172] (0xc0003c0dc0) Reply frame received for 5\nI0822 18:52:22.853447 861 log.go:172] 
(0xc0003c0dc0) Data frame received for 5\nI0822 18:52:22.853500 861 log.go:172] (0xc000aa60a0) (5) Data frame handling\nI0822 18:52:22.853519 861 log.go:172] (0xc000aa60a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.99.203.132 80\nConnection to 10.99.203.132 80 port [tcp/http] succeeded!\nI0822 18:52:22.853554 861 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0822 18:52:22.853595 861 log.go:172] (0xc000aa6000) (3) Data frame handling\nI0822 18:52:22.853619 861 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0822 18:52:22.853631 861 log.go:172] (0xc000aa60a0) (5) Data frame handling\nI0822 18:52:22.854874 861 log.go:172] (0xc0003c0dc0) Data frame received for 1\nI0822 18:52:22.854898 861 log.go:172] (0xc000a3a000) (1) Data frame handling\nI0822 18:52:22.854907 861 log.go:172] (0xc000a3a000) (1) Data frame sent\nI0822 18:52:22.854918 861 log.go:172] (0xc0003c0dc0) (0xc000a3a000) Stream removed, broadcasting: 1\nI0822 18:52:22.854933 861 log.go:172] (0xc0003c0dc0) Go away received\nI0822 18:52:22.855297 861 log.go:172] (0xc0003c0dc0) (0xc000a3a000) Stream removed, broadcasting: 1\nI0822 18:52:22.855313 861 log.go:172] (0xc0003c0dc0) (0xc000aa6000) Stream removed, broadcasting: 3\nI0822 18:52:22.855321 861 log.go:172] (0xc0003c0dc0) (0xc000aa60a0) Stream removed, broadcasting: 5\n" Aug 22 18:52:22.862: INFO: stdout: "" Aug 22 18:52:22.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6561 execpodvhgbv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30156' Aug 22 18:52:23.080: INFO: stderr: "I0822 18:52:23.007114 881 log.go:172] (0xc0000f7290) (0xc000b4e140) Create stream\nI0822 18:52:23.007186 881 log.go:172] (0xc0000f7290) (0xc000b4e140) Stream added, broadcasting: 1\nI0822 18:52:23.009651 881 log.go:172] (0xc0000f7290) Reply frame received for 1\nI0822 18:52:23.009709 881 log.go:172] (0xc0000f7290) (0xc0005a3ae0) Create stream\nI0822 18:52:23.009725 881 log.go:172] (0xc0000f7290) (0xc0005a3ae0) Stream added, broadcasting: 3\nI0822 18:52:23.010562 881 log.go:172] (0xc0000f7290) Reply frame received for 3\nI0822 18:52:23.010588 881 log.go:172] (0xc0000f7290) (0xc000b4e280) Create stream\nI0822 18:52:23.010596 881 log.go:172] (0xc0000f7290) (0xc000b4e280) Stream added, broadcasting: 5\nI0822 18:52:23.011528 881 log.go:172] (0xc0000f7290) Reply frame received for 5\nI0822 18:52:23.072370 881 log.go:172] (0xc0000f7290) Data frame received for 5\nI0822 18:52:23.072409 881 log.go:172] (0xc000b4e280) (5) Data frame handling\nI0822 18:52:23.072438 881 log.go:172] (0xc000b4e280) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 30156\nI0822 18:52:23.072842 881 log.go:172] (0xc0000f7290) Data frame received for 5\nI0822 18:52:23.072859 881 log.go:172] (0xc000b4e280) (5) Data frame handling\nI0822 18:52:23.072870 881 log.go:172] (0xc000b4e280) (5) Data frame sent\nConnection to 172.18.0.6 30156 port [tcp/30156] succeeded!\nI0822 18:52:23.073178 881 log.go:172] (0xc0000f7290) Data frame received for 3\nI0822 18:52:23.073204 881 log.go:172] (0xc0000f7290) Data frame received for 5\nI0822 18:52:23.073222 881 log.go:172] (0xc000b4e280) (5) Data frame handling\nI0822 18:52:23.073249 881 log.go:172] (0xc0005a3ae0) (3) Data frame handling\nI0822 18:52:23.074637 881 log.go:172] (0xc0000f7290) Data frame received for 1\nI0822 18:52:23.074659 881 log.go:172] (0xc000b4e140) (1) Data frame handling\nI0822 18:52:23.074668 881 log.go:172] (0xc000b4e140) (1) Data frame sent\nI0822 18:52:23.074678 881 log.go:172] (0xc0000f7290) (0xc000b4e140) Stream removed, 
broadcasting: 1\nI0822 18:52:23.074704 881 log.go:172] (0xc0000f7290) Go away received\nI0822 18:52:23.074961 881 log.go:172] (0xc0000f7290) (0xc000b4e140) Stream removed, broadcasting: 1\nI0822 18:52:23.074976 881 log.go:172] (0xc0000f7290) (0xc0005a3ae0) Stream removed, broadcasting: 3\nI0822 18:52:23.074986 881 log.go:172] (0xc0000f7290) (0xc000b4e280) Stream removed, broadcasting: 5\n" Aug 22 18:52:23.080: INFO: stdout: "" Aug 22 18:52:23.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6561 execpodvhgbv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 30156' Aug 22 18:52:23.281: INFO: stderr: "I0822 18:52:23.198817 902 log.go:172] (0xc000978000) (0xc000516be0) Create stream\nI0822 18:52:23.198863 902 log.go:172] (0xc000978000) (0xc000516be0) Stream added, broadcasting: 1\nI0822 18:52:23.200812 902 log.go:172] (0xc000978000) Reply frame received for 1\nI0822 18:52:23.200852 902 log.go:172] (0xc000978000) (0xc00077a0a0) Create stream\nI0822 18:52:23.200876 902 log.go:172] (0xc000978000) (0xc00077a0a0) Stream added, broadcasting: 3\nI0822 18:52:23.201519 902 log.go:172] (0xc000978000) Reply frame received for 3\nI0822 18:52:23.201544 902 log.go:172] (0xc000978000) (0xc000930000) Create stream\nI0822 18:52:23.201552 902 log.go:172] (0xc000978000) (0xc000930000) Stream added, broadcasting: 5\nI0822 18:52:23.202099 902 log.go:172] (0xc000978000) Reply frame received for 5\nI0822 18:52:23.273662 902 log.go:172] (0xc000978000) Data frame received for 5\nI0822 18:52:23.273785 902 log.go:172] (0xc000930000) (5) Data frame handling\nI0822 18:52:23.273854 902 log.go:172] (0xc000930000) (5) Data frame sent\nI0822 18:52:23.273871 902 log.go:172] (0xc000978000) Data frame received for 5\nI0822 18:52:23.273884 902 log.go:172] (0xc000930000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 30156\nConnection to 172.18.0.3 30156 port [tcp/30156] succeeded!\nI0822 18:52:23.273907 902 log.go:172] (0xc000930000) (5) Data frame sent\nI0822 18:52:23.274152 902 log.go:172] (0xc000978000) Data frame received for 3\nI0822 18:52:23.274167 902 log.go:172] (0xc00077a0a0) (3) Data frame handling\nI0822 18:52:23.274331 902 log.go:172] (0xc000978000) Data frame received for 5\nI0822 18:52:23.274353 902 log.go:172] (0xc000930000) (5) Data frame handling\nI0822 18:52:23.275766 902 log.go:172] (0xc000978000) Data frame received for 1\nI0822 18:52:23.275787 902 log.go:172] (0xc000516be0) (1) Data frame handling\nI0822 18:52:23.275796 902 log.go:172] (0xc000516be0) (1) Data frame sent\nI0822 18:52:23.275991 902 log.go:172] (0xc000978000) (0xc000516be0) Stream removed, broadcasting: 1\nI0822 18:52:23.276020 902 log.go:172] (0xc000978000) Go away received\nI0822 18:52:23.276296 902 log.go:172] (0xc000978000) (0xc000516be0) Stream removed, broadcasting: 1\nI0822 18:52:23.276312 902 log.go:172] (0xc000978000) (0xc00077a0a0) Stream removed, broadcasting: 3\nI0822 18:52:23.276320 902 log.go:172] (0xc000978000) (0xc000930000) Stream removed, broadcasting: 5\n" Aug 22 18:52:23.281: INFO: stdout: "" Aug 22 18:52:23.281: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:52:23.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6561" for this suite. 
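The transition this test drives can be replayed by hand: the service starts as type ExternalName (a pure DNS CNAME with no cluster IP), is flipped to NodePort, and is backed by the "externalname-service" replication controller, which is why the nc probes above succeed against the service name, the allocated ClusterIP (10.99.203.132), and the NodePort (30156) on each node. A sketch with kubectl, where the external name and the exec pod are placeholders:

  kubectl create service externalname externalname-service --external-name=example.com
  kubectl patch service externalname-service --type merge -p \
    '{"spec":{"type":"NodePort","externalName":null,"selector":{"name":"externalname-service"},"ports":[{"port":80,"targetPort":80}]}}'
  kubectl get service externalname-service -o jsonpath='{.spec.ports[0].nodePort}'
  # same connectivity check the suite runs from its exec pod:
  kubectl exec <exec-pod> -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'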
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:68.278 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":40,"skipped":765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:52:23.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Aug 22 18:52:24.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7541' Aug 22 18:52:24.683: INFO: stderr: "" Aug 22 18:52:24.683: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 22 18:52:24.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7541' Aug 22 18:52:24.778: INFO: stderr: "" Aug 22 18:52:24.778: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Aug 22 18:52:29.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7541' Aug 22 18:52:30.071: INFO: stderr: "" Aug 22 18:52:30.071: INFO: stdout: "update-demo-nautilus-2bqtj update-demo-nautilus-sz8bt " Aug 22 18:52:30.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bqtj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7541' Aug 22 18:52:30.758: INFO: stderr: "" Aug 22 18:52:30.758: INFO: stdout: "" Aug 22 18:52:30.758: INFO: update-demo-nautilus-2bqtj is created but not running Aug 22 18:52:35.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7541' Aug 22 18:52:36.124: INFO: stderr: "" Aug 22 18:52:36.124: INFO: stdout: "update-demo-nautilus-2bqtj update-demo-nautilus-sz8bt " Aug 22 18:52:36.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bqtj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7541' Aug 22 18:52:36.511: INFO: stderr: "" Aug 22 18:52:36.511: INFO: stdout: "true" Aug 22 18:52:36.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bqtj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7541' Aug 22 18:52:36.977: INFO: stderr: "" Aug 22 18:52:36.977: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 22 18:52:36.977: INFO: validating pod update-demo-nautilus-2bqtj Aug 22 18:52:37.238: INFO: got data: { "image": "nautilus.jpg" } Aug 22 18:52:37.238: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 22 18:52:37.238: INFO: update-demo-nautilus-2bqtj is verified up and running Aug 22 18:52:37.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sz8bt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7541' Aug 22 18:52:37.904: INFO: stderr: "" Aug 22 18:52:37.904: INFO: stdout: "true" Aug 22 18:52:37.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sz8bt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7541' Aug 22 18:52:38.728: INFO: stderr: "" Aug 22 18:52:38.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 22 18:52:38.728: INFO: validating pod update-demo-nautilus-sz8bt Aug 22 18:52:39.285: INFO: got data: { "image": "nautilus.jpg" } Aug 22 18:52:39.285: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 22 18:52:39.285: INFO: update-demo-nautilus-sz8bt is verified up and running STEP: using delete to clean up resources Aug 22 18:52:39.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7541' Aug 22 18:52:40.582: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 22 18:52:40.582: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 22 18:52:40.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7541' Aug 22 18:52:41.304: INFO: stderr: "No resources found in kubectl-7541 namespace.\n" Aug 22 18:52:41.304: INFO: stdout: "" Aug 22 18:52:41.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7541 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 22 18:52:42.985: INFO: stderr: "" Aug 22 18:52:42.985: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:52:42.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7541" for this suite. • [SLOW TEST:20.426 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":41,"skipped":788,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:52:44.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
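The repeated "can't tolerate node jerma-control-plane" lines that follow are expected rather than a failure: the DaemonSet's pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the framework excludes that node when counting available pods and only waits on the two workers. A toleration like the one below, shown as a merge patch for illustration (a DaemonSet that needs it would normally ship it in its template), opts daemon pods onto such tainted nodes:

  kubectl patch daemonset daemon-set --type merge -p \
    '{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}}}}'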
Aug 22 18:52:47.986: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:48.471: INFO: Number of nodes with available pods: 0 Aug 22 18:52:48.471: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:49.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:49.582: INFO: Number of nodes with available pods: 0 Aug 22 18:52:49.582: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:50.897: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:51.241: INFO: Number of nodes with available pods: 0 Aug 22 18:52:51.241: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:51.553: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:51.612: INFO: Number of nodes with available pods: 0 Aug 22 18:52:51.613: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:52.974: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:54.730: INFO: Number of nodes with available pods: 0 Aug 22 18:52:54.730: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:55.733: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:56.169: INFO: Number of nodes with available pods: 0 Aug 22 18:52:56.169: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:57.139: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:57.194: INFO: Number of nodes with available pods: 0 Aug 22 18:52:57.194: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:57.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:57.627: INFO: Number of nodes with available pods: 0 Aug 22 18:52:57.627: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:52:58.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:52:58.739: INFO: Number of nodes with available pods: 1 Aug 22 18:52:58.739: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:00.092: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:00.271: INFO: Number of nodes with available pods: 1 Aug 22 18:53:00.271: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:00.476: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:00.479: INFO: Number of nodes with available pods: 1 Aug 22 18:53:00.479: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:02.243: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:02.356: INFO: Number of nodes with available pods: 2 Aug 22 18:53:02.356: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 22 18:53:02.709: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:03.128: INFO: Number of nodes with available pods: 2 Aug 22 18:53:03.128: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1952, will wait for the garbage collector to delete the pods Aug 22 18:53:06.712: INFO: Deleting DaemonSet.extensions daemon-set took: 919.157366ms Aug 22 18:53:08.612: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.900312203s Aug 22 18:53:18.337: INFO: Number of nodes with available pods: 0 Aug 22 18:53:18.338: INFO: Number of running nodes: 0, number of available pods: 0 Aug 22 18:53:18.630: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1952/daemonsets","resourceVersion":"2538506"},"items":null} Aug 22 18:53:19.236: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1952/pods","resourceVersion":"2538509"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:53:20.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1952" for this suite. 
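The revival step above (a daemon pod's phase forced to Failed, then a replacement observed) reflects the DaemonSet controller's reconcile loop: a daemon pod that ends up Failed is deleted and recreated so each eligible node keeps exactly one running copy. The same loop can be watched by removing a daemon pod manually (namespace, pod name, and selector are placeholders):

  kubectl -n <namespace> delete pod <daemon-set-pod>
  kubectl -n <namespace> get pods -l <daemonset-selector> -w   # a replacement appears shortly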
• [SLOW TEST:37.167 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":42,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:53:21.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 22 18:53:24.943: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 22 18:53:25.309: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:25.807: INFO: Number of nodes with available pods: 0 Aug 22 18:53:25.807: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:26.865: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:26.925: INFO: Number of nodes with available pods: 0 Aug 22 18:53:26.925: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:27.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:28.555: INFO: Number of nodes with available pods: 0 Aug 22 18:53:28.555: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:29.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:29.422: INFO: Number of nodes with available pods: 0 Aug 22 18:53:29.422: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:30.261: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:30.633: INFO: Number of nodes with available pods: 0 Aug 22 18:53:30.633: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:31.099: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:31.101: INFO: Number of nodes with available pods: 0 Aug 22 18:53:31.101: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:31.854: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:32.087: INFO: Number of nodes with available pods: 0 Aug 22 18:53:32.087: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:53:33.052: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:33.392: INFO: Number of nodes with available pods: 1 Aug 22 18:53:33.392: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:53:33.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:34.100: INFO: Number of nodes with available pods: 1 Aug 22 18:53:34.100: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:53:34.879: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:34.985: INFO: Number of nodes with available pods: 2 Aug 22 18:53:34.985: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
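The image bump the suite performs next (from docker.io/library/httpd:2.4.38-alpine to gcr.io/kubernetes-e2e-test-images/agnhost:2.8, per the lines below) can be issued with kubectl; with updateStrategy RollingUpdate the controller replaces pods one node at a time, which is why the log only ever reports a single pod as "not available" at any moment. A sketch against the namespace seen later in this test, with the container name as a placeholder:

  kubectl -n daemonsets-6122 set image daemonset/daemon-set \
    <container-name>=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
  kubectl -n daemonsets-6122 rollout status daemonset/daemon-set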
Aug 22 18:53:35.403: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:35.403: INFO: Wrong image for pod: daemon-set-z4xg2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:36.515: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:38.368: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:38.368: INFO: Wrong image for pod: daemon-set-z4xg2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:38.372: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:39.240: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:39.240: INFO: Wrong image for pod: daemon-set-z4xg2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:40.382: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:41.650: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:41.650: INFO: Wrong image for pod: daemon-set-z4xg2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:42.434: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:43.388: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:43.388: INFO: Wrong image for pod: daemon-set-z4xg2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:43.388: INFO: Pod daemon-set-z4xg2 is not available Aug 22 18:53:43.453: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:43.655: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:43.655: INFO: Wrong image for pod: daemon-set-z4xg2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:43.655: INFO: Pod daemon-set-z4xg2 is not available Aug 22 18:53:44.089: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:45.344: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 22 18:53:45.344: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:46.594: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:48.141: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:48.141: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:48.644: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:50.269: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:50.269: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:50.273: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:51.021: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:51.021: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:51.640: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:52.632: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:52.633: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:52.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:53.733: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:53.733: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:55.482: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:56.137: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:56.137: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:57.668: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:53:58.655: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:53:58.655: INFO: Pod daemon-set-zld5x is not available Aug 22 18:53:58.866: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:00.021: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 22 18:54:00.907: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:01.636: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:01.639: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:02.523: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:02.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:03.688: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:03.692: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:04.523: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:04.563: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:05.848: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:05.848: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:06.106: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:06.537: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:06.537: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:06.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:07.640: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:07.640: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:07.649: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:08.692: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:08.692: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:09.184: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:09.734: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 22 18:54:09.735: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:10.024: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:11.057: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:11.057: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:11.348: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:12.045: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:12.045: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:12.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:12.987: INFO: Wrong image for pod: daemon-set-pc4p7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 22 18:54:12.987: INFO: Pod daemon-set-pc4p7 is not available Aug 22 18:54:13.435: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:14.411: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:14.962: INFO: Pod daemon-set-f4vfr is not available Aug 22 18:54:15.243: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 22 18:54:15.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:15.495: INFO: Number of nodes with available pods: 1 Aug 22 18:54:15.495: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:16.865: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:17.171: INFO: Number of nodes with available pods: 1 Aug 22 18:54:17.171: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:17.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:18.734: INFO: Number of nodes with available pods: 1 Aug 22 18:54:18.734: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:19.706: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:19.771: INFO: Number of nodes with available pods: 1 Aug 22 18:54:19.771: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:20.951: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:21.573: INFO: Number of nodes with available pods: 1 Aug 22 18:54:21.573: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:22.833: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:23.189: INFO: Number of nodes with available pods: 1 Aug 22 18:54:23.189: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:24.106: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:24.348: INFO: Number of nodes with available pods: 1 Aug 22 18:54:24.348: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:24.572: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:24.576: INFO: Number of nodes with available pods: 1 Aug 22 18:54:24.576: INFO: Node jerma-worker2 is running more than one daemon pod Aug 22 18:54:25.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:54:25.678: INFO: Number of nodes with available pods: 2 Aug 22 18:54:25.678: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6122, will wait for the garbage collector to delete the pods Aug 22 18:54:28.672: INFO: Deleting DaemonSet.extensions daemon-set took: 265.114851ms Aug 22 18:54:30.272: INFO: 
Terminating DaemonSet.extensions daemon-set pods took: 1.600342925s Aug 22 18:54:53.441: INFO: Number of nodes with available pods: 0 Aug 22 18:54:53.441: INFO: Number of running nodes: 0, number of available pods: 0 Aug 22 18:54:53.444: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6122/daemonsets","resourceVersion":"2539104"},"items":null} Aug 22 18:54:53.446: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6122/pods","resourceVersion":"2539104"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:54:53.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6122" for this suite. • [SLOW TEST:92.060 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":43,"skipped":835,"failed":0} [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:54:53.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0ce3e378-3adb-47f7-aea9-5e88ebcf5d8a STEP: Creating a pod to test consume secrets Aug 22 18:54:56.383: INFO: Waiting up to 5m0s for pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d" in namespace "secrets-9348" to be "success or failure" Aug 22 18:54:56.406: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.752333ms Aug 22 18:54:58.799: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416168839s Aug 22 18:55:01.017: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.633747501s Aug 22 18:55:03.267: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.883660841s Aug 22 18:55:06.521: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137362154s Aug 22 18:55:08.641: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d": Phase="Running", Reason="", readiness=true. Elapsed: 12.257525691s Aug 22 18:55:10.722: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.338793738s STEP: Saw pod success Aug 22 18:55:10.722: INFO: Pod "pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d" satisfied condition "success or failure" Aug 22 18:55:10.725: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d container secret-volume-test: STEP: delete the pod Aug 22 18:55:11.353: INFO: Waiting for pod pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d to disappear Aug 22 18:55:11.877: INFO: Pod pod-secrets-bc6280e0-1c11-4479-95ba-b9b943909c8d no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:55:11.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9348" for this suite. STEP: Destroying namespace "secret-namespace-286" for this suite. • [SLOW TEST:18.707 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":835,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:55:12.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-cffbca1f-8720-4a26-9618-cdf921c2be51 STEP: Creating a pod to test consume configMaps Aug 22 18:55:13.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665" in namespace "projected-5637" to be "success or failure" Aug 22 18:55:13.458: INFO: Pod 
"pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665": Phase="Pending", Reason="", readiness=false. Elapsed: 311.220465ms Aug 22 18:55:15.489: INFO: Pod "pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341944138s Aug 22 18:55:17.598: INFO: Pod "pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450938694s Aug 22 18:55:20.166: INFO: Pod "pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665": Phase="Pending", Reason="", readiness=false. Elapsed: 7.018793947s Aug 22 18:55:22.191: INFO: Pod "pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665": Phase="Running", Reason="", readiness=true. Elapsed: 9.044266198s Aug 22 18:55:24.339: INFO: Pod "pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.191340798s STEP: Saw pod success Aug 22 18:55:24.339: INFO: Pod "pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665" satisfied condition "success or failure" Aug 22 18:55:24.341: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665 container projected-configmap-volume-test: STEP: delete the pod Aug 22 18:55:24.634: INFO: Waiting for pod pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665 to disappear Aug 22 18:55:24.937: INFO: Pod pod-projected-configmaps-1eaa8246-e7e1-4a85-957e-f3ad52a88665 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:55:24.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5637" for this suite. 
• [SLOW TEST:12.777 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":844,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:55:24.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:56:27.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3900" for this suite. 
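Each terminate-cmd-* container above is checked against the interplay of restart policy and exit code: RestartCount, pod Phase, the Ready condition, and the terminated State must all line up. The smallest by-hand version of that check looks like this (pod name and image are placeholders):

  kubectl run term-demo --image=docker.io/library/busybox:1.29 --restart=Never -- /bin/sh -c 'exit 0'
  kubectl get pod term-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'
  # expected output: "Succeeded 0"; with restartPolicy OnFailure and a non-zero
  # exit code the kubelet restarts the container and restartCount climbs instead.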
• [SLOW TEST:62.534 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":846,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:56:27.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-903a348a-5bca-4c13-9866-9b343c32a606 STEP: Creating a pod to test consume configMaps Aug 22 18:56:29.279: INFO: Waiting up to 5m0s for pod "pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c" in namespace "configmap-3919" to be "success or failure" Aug 22 18:56:29.530: INFO: Pod "pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c": Phase="Pending", Reason="", readiness=false. Elapsed: 250.930662ms Aug 22 18:56:31.535: INFO: Pod "pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255418315s Aug 22 18:56:33.554: INFO: Pod "pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274596457s Aug 22 18:56:35.805: INFO: Pod "pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.525281454s STEP: Saw pod success Aug 22 18:56:35.805: INFO: Pod "pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c" satisfied condition "success or failure" Aug 22 18:56:36.093: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c container configmap-volume-test: STEP: delete the pod Aug 22 18:56:37.258: INFO: Waiting for pod pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c to disappear Aug 22 18:56:37.375: INFO: Pod pod-configmaps-a814e38f-4e8e-48d0-b71d-f6491796553c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:56:37.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3919" for this suite. • [SLOW TEST:10.177 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":854,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:56:37.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3663ddd6-59c2-4e31-9a9f-821bcc9e088a STEP: Creating a pod to test consume secrets Aug 22 18:56:38.328: INFO: Waiting up to 5m0s for pod "pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74" in namespace "secrets-7504" to be "success or failure" Aug 22 18:56:38.374: INFO: Pod "pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74": Phase="Pending", Reason="", readiness=false. Elapsed: 45.951061ms Aug 22 18:56:41.022: INFO: Pod "pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694729359s Aug 22 18:56:43.093: INFO: Pod "pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74": Phase="Running", Reason="", readiness=true. Elapsed: 4.765132634s Aug 22 18:56:45.097: INFO: Pod "pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.769386938s STEP: Saw pod success Aug 22 18:56:45.097: INFO: Pod "pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74" satisfied condition "success or failure" Aug 22 18:56:45.101: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74 container secret-volume-test: STEP: delete the pod Aug 22 18:56:45.281: INFO: Waiting for pod pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74 to disappear Aug 22 18:56:45.295: INFO: Pod pod-secrets-f237967b-dc9d-4aeb-9bd4-82ff9b168e74 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:56:45.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7504" for this suite. • [SLOW TEST:7.644 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":855,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:56:45.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 22 18:56:45.469: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f" in namespace "security-context-test-9397" to be "success or failure" Aug 22 18:56:45.533: INFO: Pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 64.101753ms Aug 22 18:56:48.065: INFO: Pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.595468253s Aug 22 18:56:50.069: INFO: Pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599492468s Aug 22 18:56:52.514: INFO: Pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f": Phase="Running", Reason="", readiness=true. 
Elapsed: 7.044500809s
Aug 22 18:56:54.627: INFO: Pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.158044851s
Aug 22 18:56:54.627: INFO: Pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f" satisfied condition "success or failure"
Aug 22 18:56:54.635: INFO: Got logs for pod "busybox-privileged-false-0713eed9-a609-487f-905a-2e3e05842e7f": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:56:54.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9397" for this suite.
• [SLOW TEST:9.339 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
When creating a pod with privileged
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":865,"failed":0}
SSSSSSSSSSSSS
------------------------------
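The captured container log just above — "ip: RTNETLINK answers: Operation not permitted" — is the actual assertion of that spec: with privileged: false the container may not reconfigure the node's network stack. A minimal pod that reproduces the refusal (all names here are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	privileged := false
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox-privileged-false-demo",
				Image: "busybox",
				// Try to create a network interface; without privileges the
				// kernel refuses with "RTNETLINK answers: Operation not permitted".
				Command:         []string{"ip", "link", "add", "dummy0", "type", "dummy"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}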
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:56:54.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 18:56:57.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f" in namespace "projected-7122" to be "success or failure"
Aug 22 18:56:57.225: INFO: Pod "downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.421628ms
Aug 22 18:56:59.714: INFO: Pod "downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.520247755s
Aug 22 18:57:01.813: INFO: Pod "downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.619228211s
Aug 22 18:57:03.829: INFO: Pod "downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634708954s
Aug 22 18:57:06.274: INFO: Pod "downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.079989539s
STEP: Saw pod success
Aug 22 18:57:06.274: INFO: Pod "downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f" satisfied condition "success or failure"
Aug 22 18:57:06.279: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f container client-container:
STEP: delete the pod
Aug 22 18:57:06.919: INFO: Waiting for pod downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f to disappear
Aug 22 18:57:07.603: INFO: Pod downwardapi-volume-92a89107-83e9-415b-835f-e9bf639c069f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:57:07.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7122" for this suite.
• [SLOW TEST:13.764 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":878,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:57:08.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6582
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 22 18:57:09.922: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 22 18:57:43.769: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:8080/dial?request=hostname&protocol=udp&host=10.244.2.62&port=8081&tries=1'] Namespace:pod-network-test-6582 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:57:43.769: INFO: >>> kubeConfig: /root/.kube/config I0822
18:57:43.802911 6 log.go:172] (0xc001b76580) (0xc002783860) Create stream I0822 18:57:43.802939 6 log.go:172] (0xc001b76580) (0xc002783860) Stream added, broadcasting: 1 I0822 18:57:43.804653 6 log.go:172] (0xc001b76580) Reply frame received for 1 I0822 18:57:43.804700 6 log.go:172] (0xc001b76580) (0xc002e6c0a0) Create stream I0822 18:57:43.804717 6 log.go:172] (0xc001b76580) (0xc002e6c0a0) Stream added, broadcasting: 3 I0822 18:57:43.805759 6 log.go:172] (0xc001b76580) Reply frame received for 3 I0822 18:57:43.805786 6 log.go:172] (0xc001b76580) (0xc002fdda40) Create stream I0822 18:57:43.805797 6 log.go:172] (0xc001b76580) (0xc002fdda40) Stream added, broadcasting: 5 I0822 18:57:43.806593 6 log.go:172] (0xc001b76580) Reply frame received for 5 I0822 18:57:43.874530 6 log.go:172] (0xc001b76580) Data frame received for 3 I0822 18:57:43.874562 6 log.go:172] (0xc002e6c0a0) (3) Data frame handling I0822 18:57:43.874582 6 log.go:172] (0xc002e6c0a0) (3) Data frame sent I0822 18:57:43.875072 6 log.go:172] (0xc001b76580) Data frame received for 5 I0822 18:57:43.875090 6 log.go:172] (0xc002fdda40) (5) Data frame handling I0822 18:57:43.875107 6 log.go:172] (0xc001b76580) Data frame received for 3 I0822 18:57:43.875115 6 log.go:172] (0xc002e6c0a0) (3) Data frame handling I0822 18:57:43.877062 6 log.go:172] (0xc001b76580) Data frame received for 1 I0822 18:57:43.877082 6 log.go:172] (0xc002783860) (1) Data frame handling I0822 18:57:43.877094 6 log.go:172] (0xc002783860) (1) Data frame sent I0822 18:57:43.877159 6 log.go:172] (0xc001b76580) (0xc002783860) Stream removed, broadcasting: 1 I0822 18:57:43.877204 6 log.go:172] (0xc001b76580) Go away received I0822 18:57:43.877258 6 log.go:172] (0xc001b76580) (0xc002783860) Stream removed, broadcasting: 1 I0822 18:57:43.877280 6 log.go:172] (0xc001b76580) (0xc002e6c0a0) Stream removed, broadcasting: 3 I0822 18:57:43.877296 6 log.go:172] (0xc001b76580) (0xc002fdda40) Stream removed, broadcasting: 5 Aug 22 18:57:43.877: INFO: Waiting for responses: map[] Aug 22 18:57:43.884: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:8080/dial?request=hostname&protocol=udp&host=10.244.1.57&port=8081&tries=1'] Namespace:pod-network-test-6582 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 22 18:57:43.884: INFO: >>> kubeConfig: /root/.kube/config I0822 18:57:43.909410 6 log.go:172] (0xc002c55b80) (0xc002e6c500) Create stream I0822 18:57:43.909433 6 log.go:172] (0xc002c55b80) (0xc002e6c500) Stream added, broadcasting: 1 I0822 18:57:43.910652 6 log.go:172] (0xc002c55b80) Reply frame received for 1 I0822 18:57:43.910691 6 log.go:172] (0xc002c55b80) (0xc002e6c5a0) Create stream I0822 18:57:43.910709 6 log.go:172] (0xc002c55b80) (0xc002e6c5a0) Stream added, broadcasting: 3 I0822 18:57:43.911354 6 log.go:172] (0xc002c55b80) Reply frame received for 3 I0822 18:57:43.911387 6 log.go:172] (0xc002c55b80) (0xc002003f40) Create stream I0822 18:57:43.911399 6 log.go:172] (0xc002c55b80) (0xc002003f40) Stream added, broadcasting: 5 I0822 18:57:43.912188 6 log.go:172] (0xc002c55b80) Reply frame received for 5 I0822 18:57:44.000483 6 log.go:172] (0xc002c55b80) Data frame received for 3 I0822 18:57:44.000510 6 log.go:172] (0xc002e6c5a0) (3) Data frame handling I0822 18:57:44.000522 6 log.go:172] (0xc002e6c5a0) (3) Data frame sent I0822 18:57:44.000956 6 log.go:172] (0xc002c55b80) Data frame received for 5 I0822 18:57:44.000981 6 log.go:172] (0xc002003f40) (5) Data frame handling 
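For orientation inside these stream dumps: the two ExecWithOptions transcripts here are the same probe aimed at each backend pod in turn. The suite execs into host-test-container-pod and curls agnhost's /dial endpoint, which relays a UDP "hostname" request to the target pod IP and reports back which hostnames answered. A standalone sketch of that probe — the {"responses": [...]} response shape is an assumption inferred from the framework's behaviour, not something this log shows; the second transcript continues below:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Ask the agnhost "dial" endpoint on one pod to send a UDP request to
// another pod and report which hostnames answered.
func main() {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "udp")
	q.Set("host", "10.244.2.62") // target pod IP, taken from the log above
	q.Set("port", "8081")
	q.Set("tries", "1")
	resp, err := http.Get("http://10.244.1.58:8080/dial?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result struct {
		Responses []string `json:"responses"` // assumed response contract
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println("hostnames that answered:", result.Responses)
}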
I0822 18:57:44.000999 6 log.go:172] (0xc002c55b80) Data frame received for 3 I0822 18:57:44.001010 6 log.go:172] (0xc002e6c5a0) (3) Data frame handling I0822 18:57:44.002034 6 log.go:172] (0xc002c55b80) Data frame received for 1 I0822 18:57:44.002139 6 log.go:172] (0xc002e6c500) (1) Data frame handling I0822 18:57:44.002173 6 log.go:172] (0xc002e6c500) (1) Data frame sent I0822 18:57:44.002187 6 log.go:172] (0xc002c55b80) (0xc002e6c500) Stream removed, broadcasting: 1 I0822 18:57:44.002198 6 log.go:172] (0xc002c55b80) Go away received I0822 18:57:44.002319 6 log.go:172] (0xc002c55b80) (0xc002e6c500) Stream removed, broadcasting: 1 I0822 18:57:44.002336 6 log.go:172] (0xc002c55b80) (0xc002e6c5a0) Stream removed, broadcasting: 3 I0822 18:57:44.002345 6 log.go:172] (0xc002c55b80) (0xc002003f40) Stream removed, broadcasting: 5 Aug 22 18:57:44.002: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:57:44.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6582" for this suite. • [SLOW TEST:35.603 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":892,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:57:44.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 22 18:57:45.742: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 22 18:57:47.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719466, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 22 18:57:49.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719466, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 22 18:57:51.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719466, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719465, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 22 18:57:55.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:57:59.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8442" for this suite. STEP: Destroying namespace "webhook-8442-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.480 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":52,"skipped":902,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:57:59.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 22 18:57:59.803: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 22 18:57:59.941: INFO: Waiting for terminating namespaces to be deleted... 
Aug 22 18:57:59.987: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 18:57:59.993: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 18:57:59.993: INFO: Container app ready: true, restart count 0
Aug 22 18:57:59.993: INFO: rally-5661cb11-lhxe2h4q from c-rally-5661cb11-u8a7rhm3 started at 2020-08-22 18:57:17 +0000 UTC (1 container statuses recorded)
Aug 22 18:57:59.993: INFO: Container rally-5661cb11-lhxe2h4q ready: true, restart count 0
Aug 22 18:57:59.993: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 18:57:59.993: INFO: Container kube-proxy ready: true, restart count 0
Aug 22 18:57:59.993: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 18:57:59.993: INFO: Container kindnet-cni ready: true, restart count 0
Aug 22 18:57:59.993: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 18:58:00.051: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 18:58:00.051: INFO: Container kube-proxy ready: true, restart count 0
Aug 22 18:58:00.051: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 18:58:00.051: INFO: Container app ready: true, restart count 0
Aug 22 18:58:00.051: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 18:58:00.051: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 22 18:58:00.648: INFO: Pod rally-5661cb11-lhxe2h4q requesting resource cpu=0m on Node jerma-worker
Aug 22 18:58:00.649: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker
Aug 22 18:58:00.649: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2
Aug 22 18:58:00.649: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 22 18:58:00.649: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 22 18:58:00.649: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 22 18:58:00.649: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 22 18:58:00.649: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Aug 22 18:58:00.762: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
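The arithmetic driving the filler-pod size above, and the FailedScheduling event in the lines below, is plain quantity math: allocatable CPU minus what running pods already request. A sketch with k8s.io/apimachinery's resource.Quantity — the allocatable figure is a guess chosen to be consistent with the logged 11130m request, since the log never prints the node status the suite presumably reads:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := resource.MustParse("11230m") // assumed node allocatable CPU
	kindnet := resource.MustParse("100m")       // kindnet-tfrcx request, from the log
	filler := resource.MustParse("11130m")      // filler pod created by the test

	used := kindnet.DeepCopy()
	used.Add(filler) // kube-proxy and the other pods request cpu=0m

	free := allocatable.DeepCopy()
	free.Sub(used)
	// Anything left is zero, so one more pod with a non-zero CPU request
	// cannot be scheduled on this node: hence "Insufficient cpu" below.
	fmt.Printf("free cpu on jerma-worker after the filler pod: %s\n", free.String())
}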
STEP: Considering event: Type = [Normal], Name = [filler-pod-3e1a8afb-2e1d-4273-877f-624a8d9fd43e.162dac1c66da8788], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5740/filler-pod-3e1a8afb-2e1d-4273-877f-624a8d9fd43e to jerma-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3e1a8afb-2e1d-4273-877f-624a8d9fd43e.162dac1d26b8faa7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3e1a8afb-2e1d-4273-877f-624a8d9fd43e.162dac1dfea7eb6a], Reason = [Created], Message = [Created container filler-pod-3e1a8afb-2e1d-4273-877f-624a8d9fd43e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3e1a8afb-2e1d-4273-877f-624a8d9fd43e.162dac1e6565c995], Reason = [Started], Message = [Started container filler-pod-3e1a8afb-2e1d-4273-877f-624a8d9fd43e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cbd6238b-8eca-4d9c-bbf3-b7f78ec72977.162dac1c730d69f4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5740/filler-pod-cbd6238b-8eca-4d9c-bbf3-b7f78ec72977 to jerma-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cbd6238b-8eca-4d9c-bbf3-b7f78ec72977.162dac1dad9007f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cbd6238b-8eca-4d9c-bbf3-b7f78ec72977.162dac1ea0f35a65], Reason = [Created], Message = [Created container filler-pod-cbd6238b-8eca-4d9c-bbf3-b7f78ec72977]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cbd6238b-8eca-4d9c-bbf3-b7f78ec72977.162dac1f2872468d], Reason = [Started], Message = [Started container filler-pod-cbd6238b-8eca-4d9c-bbf3-b7f78ec72977]
STEP: Considering event: Type = [Warning], Name = [additional-pod.162dac1f91d2c3f3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:58:15.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5740" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:16.221 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":53,"skipped":918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:58:15.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
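The "simple DaemonSet" being created here is not printed in the log; a minimal spec consistent with it (a single "app" container on every schedulable node, with no toleration for the master taint, hence the "can't tolerate jerma-control-plane" skips in the polling below) might look like this — the image and label are stand-ins, not the suite's actual values:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "app",
						Image:   "busybox", // stand-in; the suite's actual image isn't shown in this log
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}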
Aug 22 18:58:15.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:15.884: INFO: Number of nodes with available pods: 0 Aug 22 18:58:15.884: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:16.889: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:16.894: INFO: Number of nodes with available pods: 0 Aug 22 18:58:16.894: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:18.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:18.521: INFO: Number of nodes with available pods: 0 Aug 22 18:58:18.521: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:19.210: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:19.213: INFO: Number of nodes with available pods: 0 Aug 22 18:58:19.213: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:20.036: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:20.038: INFO: Number of nodes with available pods: 0 Aug 22 18:58:20.038: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:20.898: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:20.903: INFO: Number of nodes with available pods: 0 Aug 22 18:58:20.903: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:22.150: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:22.154: INFO: Number of nodes with available pods: 2 Aug 22 18:58:22.154: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
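The once-per-second "Number of nodes with available pods" lines above and below are a poll loop. An equivalent standalone wait, sketched with a recent client-go (one whose typed clients take a context; the v1.17-era clients did not) — namespace, name, and desired count are taken from this log but the code itself is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const desired = 2 // schedulable worker nodes in this cluster
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := client.AppsV1().DaemonSets("daemonsets-8813").Get(
			context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("available pods: %d\n", ds.Status.NumberAvailable)
		return ds.Status.NumberAvailable == desired, nil
	})
	if err != nil {
		panic(err)
	}
}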
Aug 22 18:58:22.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:22.689: INFO: Number of nodes with available pods: 1 Aug 22 18:58:22.689: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:23.874: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:23.877: INFO: Number of nodes with available pods: 1 Aug 22 18:58:23.877: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:24.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:24.970: INFO: Number of nodes with available pods: 1 Aug 22 18:58:24.970: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:25.970: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:25.973: INFO: Number of nodes with available pods: 1 Aug 22 18:58:25.973: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:26.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:26.698: INFO: Number of nodes with available pods: 1 Aug 22 18:58:26.698: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:27.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:27.838: INFO: Number of nodes with available pods: 1 Aug 22 18:58:27.838: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:28.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:28.722: INFO: Number of nodes with available pods: 1 Aug 22 18:58:28.722: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:29.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:29.698: INFO: Number of nodes with available pods: 1 Aug 22 18:58:29.698: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:30.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:30.698: INFO: Number of nodes with available pods: 1 Aug 22 18:58:30.698: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:31.749: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:31.814: INFO: Number of nodes with available pods: 1 Aug 22 18:58:31.814: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:32.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:32.698: INFO: Number of nodes with available pods: 1 Aug 22 18:58:32.698: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:34.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:34.070: INFO: Number of nodes with available pods: 1 Aug 22 18:58:34.070: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:34.878: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:34.891: INFO: Number of nodes with available pods: 1 Aug 22 18:58:34.892: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:35.964: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:35.968: INFO: Number of nodes with available pods: 1 Aug 22 18:58:35.968: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:36.694: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:36.698: INFO: Number of nodes with available pods: 1 Aug 22 18:58:36.698: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:37.899: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:37.902: INFO: Number of nodes with available pods: 1 Aug 22 18:58:37.902: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:39.037: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:39.103: INFO: Number of nodes with available pods: 1 Aug 22 18:58:39.103: INFO: Node jerma-worker is running more than one daemon pod Aug 22 18:58:39.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 22 18:58:39.919: INFO: Number of nodes with available pods: 2 Aug 22 18:58:39.919: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8813, will wait for the garbage collector to delete the pods Aug 22 18:58:40.452: INFO: Deleting DaemonSet.extensions daemon-set took: 45.678249ms Aug 22 18:58:41.052: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.253112ms Aug 22 18:58:51.855: INFO: Number of nodes with available pods: 0 Aug 22 18:58:51.855: INFO: Number of running nodes: 0, number of available pods: 0 Aug 22 18:58:51.858: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8813/daemonsets","resourceVersion":"2540379"},"items":null} Aug 22 18:58:51.860: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8813/pods","resourceVersion":"2540379"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 22 18:58:51.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8813" for this suite. • [SLOW TEST:36.166 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":54,"skipped":943,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 22 18:58:51.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 22 18:59:06.073: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 22 18:59:06.073: INFO: >>> kubeConfig: /root/.kube/config I0822 18:59:06.111468 6 log.go:172] (0xc001982210) (0xc002782640) Create stream I0822 18:59:06.111522 6 log.go:172] (0xc001982210) (0xc002782640) Stream added, broadcasting: 1 I0822 18:59:06.113700 6 log.go:172] (0xc001982210) Reply frame received for 1 I0822 18:59:06.113730 6 log.go:172] (0xc001982210) (0xc0023070e0) Create stream I0822 18:59:06.113744 6 log.go:172] (0xc001982210) (0xc0023070e0) Stream added, broadcasting: 3 I0822 18:59:06.114538 6 log.go:172] (0xc001982210) Reply frame received for 3 I0822 18:59:06.114600 6 log.go:172] (0xc001982210) (0xc002307180) Create stream I0822 18:59:06.114613 6 log.go:172] (0xc001982210) (0xc002307180) Stream added, broadcasting: 5 I0822 18:59:06.115379 6 log.go:172] (0xc001982210) Reply frame received for 5 I0822 18:59:06.198474 6 log.go:172] (0xc001982210) Data frame received for 3 I0822 18:59:06.198508 6 log.go:172] (0xc0023070e0) (3) Data frame handling I0822 18:59:06.198518 6 log.go:172] (0xc0023070e0) (3) Data frame sent I0822 
I0822 18:59:06.198524 6 log.go:172] (0xc001982210) Data frame received for 3
I0822 18:59:06.198536 6 log.go:172] (0xc0023070e0) (3) Data frame handling
I0822 18:59:06.198556 6 log.go:172] (0xc001982210) Data frame received for 5
I0822 18:59:06.198565 6 log.go:172] (0xc002307180) (5) Data frame handling
I0822 18:59:06.199868 6 log.go:172] (0xc001982210) Data frame received for 1
I0822 18:59:06.199919 6 log.go:172] (0xc002782640) (1) Data frame handling
I0822 18:59:06.199936 6 log.go:172] (0xc002782640) (1) Data frame sent
I0822 18:59:06.199955 6 log.go:172] (0xc001982210) (0xc002782640) Stream removed, broadcasting: 1
I0822 18:59:06.199993 6 log.go:172] (0xc001982210) Go away received
I0822 18:59:06.200169 6 log.go:172] (0xc001982210) (0xc002782640) Stream removed, broadcasting: 1
I0822 18:59:06.200186 6 log.go:172] (0xc001982210) (0xc0023070e0) Stream removed, broadcasting: 3
I0822 18:59:06.200193 6 log.go:172] (0xc001982210) (0xc002307180) Stream removed, broadcasting: 5
Aug 22 18:59:06.200: INFO: Exec stderr: ""
Aug 22 18:59:06.200: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.200: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.233054 6 log.go:172] (0xc0064e0420) (0xc0028d9e00) Create stream
I0822 18:59:06.233084 6 log.go:172] (0xc0064e0420) (0xc0028d9e00) Stream added, broadcasting: 1
I0822 18:59:06.236066 6 log.go:172] (0xc0064e0420) Reply frame received for 1
I0822 18:59:06.236133 6 log.go:172] (0xc0064e0420) (0xc0023072c0) Create stream
I0822 18:59:06.236152 6 log.go:172] (0xc0064e0420) (0xc0023072c0) Stream added, broadcasting: 3
I0822 18:59:06.237077 6 log.go:172] (0xc0064e0420) Reply frame received for 3
I0822 18:59:06.237106 6 log.go:172] (0xc0064e0420) (0xc0023074a0) Create stream
I0822 18:59:06.237124 6 log.go:172] (0xc0064e0420) (0xc0023074a0) Stream added, broadcasting: 5
I0822 18:59:06.237897 6 log.go:172] (0xc0064e0420) Reply frame received for 5
I0822 18:59:06.303675 6 log.go:172] (0xc0064e0420) Data frame received for 5
I0822 18:59:06.303729 6 log.go:172] (0xc0023074a0) (5) Data frame handling
I0822 18:59:06.303859 6 log.go:172] (0xc0064e0420) Data frame received for 3
I0822 18:59:06.303932 6 log.go:172] (0xc0023072c0) (3) Data frame handling
I0822 18:59:06.303955 6 log.go:172] (0xc0023072c0) (3) Data frame sent
I0822 18:59:06.303977 6 log.go:172] (0xc0064e0420) Data frame received for 3
I0822 18:59:06.303996 6 log.go:172] (0xc0023072c0) (3) Data frame handling
I0822 18:59:06.305327 6 log.go:172] (0xc0064e0420) Data frame received for 1
I0822 18:59:06.305346 6 log.go:172] (0xc0028d9e00) (1) Data frame handling
I0822 18:59:06.305371 6 log.go:172] (0xc0028d9e00) (1) Data frame sent
I0822 18:59:06.305382 6 log.go:172] (0xc0064e0420) (0xc0028d9e00) Stream removed, broadcasting: 1
I0822 18:59:06.305395 6 log.go:172] (0xc0064e0420) Go away received
I0822 18:59:06.305521 6 log.go:172] (0xc0064e0420) (0xc0028d9e00) Stream removed, broadcasting: 1
I0822 18:59:06.305554 6 log.go:172] (0xc0064e0420) (0xc0023072c0) Stream removed, broadcasting: 3
I0822 18:59:06.305570 6 log.go:172] (0xc0064e0420) (0xc0023074a0) Stream removed, broadcasting: 5
Aug 22 18:59:06.305: INFO: Exec stderr: ""
Aug 22 18:59:06.305: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.305: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.333872 6 log.go:172] (0xc001b768f0) (0xc002987720) Create stream
I0822 18:59:06.333917 6 log.go:172] (0xc001b768f0) (0xc002987720) Stream added, broadcasting: 1
I0822 18:59:06.337228 6 log.go:172] (0xc001b768f0) Reply frame received for 1
I0822 18:59:06.337274 6 log.go:172] (0xc001b768f0) (0xc0028d9ea0) Create stream
I0822 18:59:06.337290 6 log.go:172] (0xc001b768f0) (0xc0028d9ea0) Stream added, broadcasting: 3
I0822 18:59:06.338147 6 log.go:172] (0xc001b768f0) Reply frame received for 3
I0822 18:59:06.338197 6 log.go:172] (0xc001b768f0) (0xc0027826e0) Create stream
I0822 18:59:06.338207 6 log.go:172] (0xc001b768f0) (0xc0027826e0) Stream added, broadcasting: 5
I0822 18:59:06.339061 6 log.go:172] (0xc001b768f0) Reply frame received for 5
I0822 18:59:06.404192 6 log.go:172] (0xc001b768f0) Data frame received for 3
I0822 18:59:06.404303 6 log.go:172] (0xc0028d9ea0) (3) Data frame handling
I0822 18:59:06.404436 6 log.go:172] (0xc001b768f0) Data frame received for 5
I0822 18:59:06.404484 6 log.go:172] (0xc0027826e0) (5) Data frame handling
I0822 18:59:06.404519 6 log.go:172] (0xc0028d9ea0) (3) Data frame sent
I0822 18:59:06.404536 6 log.go:172] (0xc001b768f0) Data frame received for 3
I0822 18:59:06.404549 6 log.go:172] (0xc0028d9ea0) (3) Data frame handling
I0822 18:59:06.405837 6 log.go:172] (0xc001b768f0) Data frame received for 1
I0822 18:59:06.405857 6 log.go:172] (0xc002987720) (1) Data frame handling
I0822 18:59:06.405870 6 log.go:172] (0xc002987720) (1) Data frame sent
I0822 18:59:06.405882 6 log.go:172] (0xc001b768f0) (0xc002987720) Stream removed, broadcasting: 1
I0822 18:59:06.405946 6 log.go:172] (0xc001b768f0) Go away received
I0822 18:59:06.405979 6 log.go:172] (0xc001b768f0) (0xc002987720) Stream removed, broadcasting: 1
I0822 18:59:06.405997 6 log.go:172] (0xc001b768f0) (0xc0028d9ea0) Stream removed, broadcasting: 3
I0822 18:59:06.406010 6 log.go:172] (0xc001b768f0) (0xc0027826e0) Stream removed, broadcasting: 5
Aug 22 18:59:06.406: INFO: Exec stderr: ""
Aug 22 18:59:06.406: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.406: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.435466 6 log.go:172] (0xc0064e09a0) (0xc002fdc000) Create stream
I0822 18:59:06.435494 6 log.go:172] (0xc0064e09a0) (0xc002fdc000) Stream added, broadcasting: 1
I0822 18:59:06.437639 6 log.go:172] (0xc0064e09a0) Reply frame received for 1
I0822 18:59:06.437681 6 log.go:172] (0xc0064e09a0) (0xc0029877c0) Create stream
I0822 18:59:06.437698 6 log.go:172] (0xc0064e09a0) (0xc0029877c0) Stream added, broadcasting: 3
I0822 18:59:06.438754 6 log.go:172] (0xc0064e09a0) Reply frame received for 3
I0822 18:59:06.438787 6 log.go:172] (0xc0064e09a0) (0xc002987860) Create stream
I0822 18:59:06.438799 6 log.go:172] (0xc0064e09a0) (0xc002987860) Stream added, broadcasting: 5
I0822 18:59:06.439651 6 log.go:172] (0xc0064e09a0) Reply frame received for 5
I0822 18:59:06.511045 6 log.go:172] (0xc0064e09a0) Data frame received for 5
I0822 18:59:06.511084 6 log.go:172] (0xc002987860) (5) Data frame handling
I0822 18:59:06.511107 6 log.go:172] (0xc0064e09a0) Data frame received for 3
I0822 18:59:06.511119 6 log.go:172] (0xc0029877c0) (3) Data frame handling
I0822 18:59:06.511131 6 log.go:172] (0xc0029877c0) (3) Data frame sent
I0822 18:59:06.511140 6 log.go:172] (0xc0064e09a0) Data frame received for 3
I0822 18:59:06.511149 6 log.go:172] (0xc0029877c0) (3) Data frame handling
I0822 18:59:06.512298 6 log.go:172] (0xc0064e09a0) Data frame received for 1
I0822 18:59:06.512312 6 log.go:172] (0xc002fdc000) (1) Data frame handling
I0822 18:59:06.512324 6 log.go:172] (0xc002fdc000) (1) Data frame sent
I0822 18:59:06.512339 6 log.go:172] (0xc0064e09a0) (0xc002fdc000) Stream removed, broadcasting: 1
I0822 18:59:06.512374 6 log.go:172] (0xc0064e09a0) Go away received
I0822 18:59:06.512464 6 log.go:172] (0xc0064e09a0) (0xc002fdc000) Stream removed, broadcasting: 1
I0822 18:59:06.512480 6 log.go:172] (0xc0064e09a0) (0xc0029877c0) Stream removed, broadcasting: 3
I0822 18:59:06.512492 6 log.go:172] (0xc0064e09a0) (0xc002987860) Stream removed, broadcasting: 5
Aug 22 18:59:06.512: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 22 18:59:06.512: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.512: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.538172 6 log.go:172] (0xc0064e1080) (0xc002fdc320) Create stream
I0822 18:59:06.538200 6 log.go:172] (0xc0064e1080) (0xc002fdc320) Stream added, broadcasting: 1
I0822 18:59:06.540468 6 log.go:172] (0xc0064e1080) Reply frame received for 1
I0822 18:59:06.540494 6 log.go:172] (0xc0064e1080) (0xc002307720) Create stream
I0822 18:59:06.540503 6 log.go:172] (0xc0064e1080) (0xc002307720) Stream added, broadcasting: 3
I0822 18:59:06.541380 6 log.go:172] (0xc0064e1080) Reply frame received for 3
I0822 18:59:06.541413 6 log.go:172] (0xc0064e1080) (0xc0023077c0) Create stream
I0822 18:59:06.541445 6 log.go:172] (0xc0064e1080) (0xc0023077c0) Stream added, broadcasting: 5
I0822 18:59:06.542107 6 log.go:172] (0xc0064e1080) Reply frame received for 5
I0822 18:59:06.605686 6 log.go:172] (0xc0064e1080) Data frame received for 5
I0822 18:59:06.605728 6 log.go:172] (0xc0023077c0) (5) Data frame handling
I0822 18:59:06.605750 6 log.go:172] (0xc0064e1080) Data frame received for 3
I0822 18:59:06.605766 6 log.go:172] (0xc002307720) (3) Data frame handling
I0822 18:59:06.605791 6 log.go:172] (0xc002307720) (3) Data frame sent
I0822 18:59:06.605804 6 log.go:172] (0xc0064e1080) Data frame received for 3
I0822 18:59:06.605814 6 log.go:172] (0xc002307720) (3) Data frame handling
I0822 18:59:06.606987 6 log.go:172] (0xc0064e1080) Data frame received for 1
I0822 18:59:06.607016 6 log.go:172] (0xc002fdc320) (1) Data frame handling
I0822 18:59:06.607055 6 log.go:172] (0xc002fdc320) (1) Data frame sent
I0822 18:59:06.607094 6 log.go:172] (0xc0064e1080) (0xc002fdc320) Stream removed, broadcasting: 1
I0822 18:59:06.607117 6 log.go:172] (0xc0064e1080) Go away received
I0822 18:59:06.607205 6 log.go:172] (0xc0064e1080) (0xc002fdc320) Stream removed, broadcasting: 1
I0822 18:59:06.607223 6 log.go:172] (0xc0064e1080) (0xc002307720) Stream removed, broadcasting: 3
I0822 18:59:06.607238 6 log.go:172] (0xc0064e1080) (0xc0023077c0) Stream removed, broadcasting: 5
Aug 22 18:59:06.607: INFO: Exec stderr: ""
Aug 22 18:59:06.607: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.607: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.635322 6 log.go:172] (0xc0064e16b0) (0xc002fdc780) Create stream
I0822 18:59:06.635354 6 log.go:172] (0xc0064e16b0) (0xc002fdc780) Stream added, broadcasting: 1
I0822 18:59:06.638329 6 log.go:172] (0xc0064e16b0) Reply frame received for 1
I0822 18:59:06.638392 6 log.go:172] (0xc0064e16b0) (0xc002307900) Create stream
I0822 18:59:06.638417 6 log.go:172] (0xc0064e16b0) (0xc002307900) Stream added, broadcasting: 3
I0822 18:59:06.639235 6 log.go:172] (0xc0064e16b0) Reply frame received for 3
I0822 18:59:06.639273 6 log.go:172] (0xc0064e16b0) (0xc002002140) Create stream
I0822 18:59:06.639294 6 log.go:172] (0xc0064e16b0) (0xc002002140) Stream added, broadcasting: 5
I0822 18:59:06.640381 6 log.go:172] (0xc0064e16b0) Reply frame received for 5
I0822 18:59:06.709126 6 log.go:172] (0xc0064e16b0) Data frame received for 5
I0822 18:59:06.709169 6 log.go:172] (0xc002002140) (5) Data frame handling
I0822 18:59:06.709232 6 log.go:172] (0xc0064e16b0) Data frame received for 3
I0822 18:59:06.709270 6 log.go:172] (0xc002307900) (3) Data frame handling
I0822 18:59:06.709319 6 log.go:172] (0xc002307900) (3) Data frame sent
I0822 18:59:06.709335 6 log.go:172] (0xc0064e16b0) Data frame received for 3
I0822 18:59:06.709345 6 log.go:172] (0xc002307900) (3) Data frame handling
I0822 18:59:06.710848 6 log.go:172] (0xc0064e16b0) Data frame received for 1
I0822 18:59:06.710884 6 log.go:172] (0xc002fdc780) (1) Data frame handling
I0822 18:59:06.710906 6 log.go:172] (0xc002fdc780) (1) Data frame sent
I0822 18:59:06.710932 6 log.go:172] (0xc0064e16b0) (0xc002fdc780) Stream removed, broadcasting: 1
I0822 18:59:06.711080 6 log.go:172] (0xc0064e16b0) (0xc002fdc780) Stream removed, broadcasting: 1
I0822 18:59:06.711123 6 log.go:172] (0xc0064e16b0) (0xc002307900) Stream removed, broadcasting: 3
I0822 18:59:06.711345 6 log.go:172] (0xc0064e16b0) (0xc002002140) Stream removed, broadcasting: 5
Aug 22 18:59:06.711: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 22 18:59:06.711: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.711: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.712990 6 log.go:172] (0xc0064e16b0) Go away received
I0822 18:59:06.742803 6 log.go:172] (0xc0064e1ce0) (0xc002fdcaa0) Create stream
I0822 18:59:06.742828 6 log.go:172] (0xc0064e1ce0) (0xc002fdcaa0) Stream added, broadcasting: 1
I0822 18:59:06.746038 6 log.go:172] (0xc0064e1ce0) Reply frame received for 1
I0822 18:59:06.746102 6 log.go:172] (0xc0064e1ce0) (0xc0020021e0) Create stream
I0822 18:59:06.746131 6 log.go:172] (0xc0064e1ce0) (0xc0020021e0) Stream added, broadcasting: 3
I0822 18:59:06.747225 6 log.go:172] (0xc0064e1ce0) Reply frame received for 3
I0822 18:59:06.747245 6 log.go:172] (0xc0064e1ce0) (0xc002987900) Create stream
I0822 18:59:06.747252 6 log.go:172] (0xc0064e1ce0) (0xc002987900) Stream added, broadcasting: 5
I0822 18:59:06.748158 6 log.go:172] (0xc0064e1ce0) Reply frame received for 5
I0822 18:59:06.812548 6 log.go:172] (0xc0064e1ce0) Data frame received for 3
I0822 18:59:06.812571 6 log.go:172] (0xc0020021e0) (3) Data frame handling
I0822 18:59:06.812593 6 log.go:172] (0xc0064e1ce0) Data frame received for 5
I0822 18:59:06.812631 6 log.go:172] (0xc002987900) (5) Data frame handling
I0822 18:59:06.812671 6 log.go:172] (0xc0020021e0) (3) Data frame sent
I0822 18:59:06.812695 6 log.go:172] (0xc0064e1ce0) Data frame received for 3
I0822 18:59:06.812710 6 log.go:172] (0xc0020021e0) (3) Data frame handling
I0822 18:59:06.814391 6 log.go:172] (0xc0064e1ce0) Data frame received for 1
I0822 18:59:06.814405 6 log.go:172] (0xc002fdcaa0) (1) Data frame handling
I0822 18:59:06.814414 6 log.go:172] (0xc002fdcaa0) (1) Data frame sent
I0822 18:59:06.814438 6 log.go:172] (0xc0064e1ce0) (0xc002fdcaa0) Stream removed, broadcasting: 1
I0822 18:59:06.814504 6 log.go:172] (0xc0064e1ce0) Go away received
I0822 18:59:06.814635 6 log.go:172] (0xc0064e1ce0) (0xc002fdcaa0) Stream removed, broadcasting: 1
I0822 18:59:06.814664 6 log.go:172] (0xc0064e1ce0) (0xc0020021e0) Stream removed, broadcasting: 3
I0822 18:59:06.814682 6 log.go:172] (0xc0064e1ce0) (0xc002987900) Stream removed, broadcasting: 5
Aug 22 18:59:06.814: INFO: Exec stderr: ""
Aug 22 18:59:06.814: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.814: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.844576 6 log.go:172] (0xc002c55d90) (0xc002307c20) Create stream
I0822 18:59:06.844603 6 log.go:172] (0xc002c55d90) (0xc002307c20) Stream added, broadcasting: 1
I0822 18:59:06.846616 6 log.go:172] (0xc002c55d90) Reply frame received for 1
I0822 18:59:06.846666 6 log.go:172] (0xc002c55d90) (0xc002987c20) Create stream
I0822 18:59:06.846683 6 log.go:172] (0xc002c55d90) (0xc002987c20) Stream added, broadcasting: 3
I0822 18:59:06.847354 6 log.go:172] (0xc002c55d90) Reply frame received for 3
I0822 18:59:06.847381 6 log.go:172] (0xc002c55d90) (0xc002fdcb40) Create stream
I0822 18:59:06.847389 6 log.go:172] (0xc002c55d90) (0xc002fdcb40) Stream added, broadcasting: 5
I0822 18:59:06.848028 6 log.go:172] (0xc002c55d90) Reply frame received for 5
I0822 18:59:06.902171 6 log.go:172] (0xc002c55d90) Data frame received for 5
I0822 18:59:06.902190 6 log.go:172] (0xc002fdcb40) (5) Data frame handling
I0822 18:59:06.902221 6 log.go:172] (0xc002c55d90) Data frame received for 3
I0822 18:59:06.902244 6 log.go:172] (0xc002987c20) (3) Data frame handling
I0822 18:59:06.902255 6 log.go:172] (0xc002987c20) (3) Data frame sent
I0822 18:59:06.902266 6 log.go:172] (0xc002c55d90) Data frame received for 3
I0822 18:59:06.902271 6 log.go:172] (0xc002987c20) (3) Data frame handling
I0822 18:59:06.903146 6 log.go:172] (0xc002c55d90) Data frame received for 1
I0822 18:59:06.903157 6 log.go:172] (0xc002307c20) (1) Data frame handling
I0822 18:59:06.903168 6 log.go:172] (0xc002307c20) (1) Data frame sent
I0822 18:59:06.903288 6 log.go:172] (0xc002c55d90) (0xc002307c20) Stream removed, broadcasting: 1
I0822 18:59:06.903311 6 log.go:172] (0xc002c55d90) Go away received
I0822 18:59:06.903432 6 log.go:172] (0xc002c55d90) (0xc002307c20) Stream removed, broadcasting: 1
I0822 18:59:06.903449 6 log.go:172] (0xc002c55d90) (0xc002987c20) Stream removed, broadcasting: 3
I0822 18:59:06.903457 6 log.go:172] (0xc002c55d90) (0xc002fdcb40) Stream removed, broadcasting: 5
Aug 22 18:59:06.903: INFO: Exec stderr: ""
Aug 22 18:59:06.903: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.903: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:06.925817 6 log.go:172] (0xc0029b20b0) (0xc002002460) Create stream
I0822 18:59:06.925840 6 log.go:172] (0xc0029b20b0) (0xc002002460) Stream added, broadcasting: 1
I0822 18:59:06.927363 6 log.go:172] (0xc0029b20b0) Reply frame received for 1
I0822 18:59:06.927383 6 log.go:172] (0xc0029b20b0) (0xc002fdcc80) Create stream
I0822 18:59:06.927390 6 log.go:172] (0xc0029b20b0) (0xc002fdcc80) Stream added, broadcasting: 3
I0822 18:59:06.927910 6 log.go:172] (0xc0029b20b0) Reply frame received for 3
I0822 18:59:06.927939 6 log.go:172] (0xc0029b20b0) (0xc002fdcd20) Create stream
I0822 18:59:06.927950 6 log.go:172] (0xc0029b20b0) (0xc002fdcd20) Stream added, broadcasting: 5
I0822 18:59:06.928441 6 log.go:172] (0xc0029b20b0) Reply frame received for 5
I0822 18:59:06.986946 6 log.go:172] (0xc0029b20b0) Data frame received for 5
I0822 18:59:06.986985 6 log.go:172] (0xc002fdcd20) (5) Data frame handling
I0822 18:59:06.987031 6 log.go:172] (0xc0029b20b0) Data frame received for 3
I0822 18:59:06.987058 6 log.go:172] (0xc002fdcc80) (3) Data frame handling
I0822 18:59:06.987081 6 log.go:172] (0xc002fdcc80) (3) Data frame sent
I0822 18:59:06.987099 6 log.go:172] (0xc0029b20b0) Data frame received for 3
I0822 18:59:06.987108 6 log.go:172] (0xc002fdcc80) (3) Data frame handling
I0822 18:59:06.988020 6 log.go:172] (0xc0029b20b0) Data frame received for 1
I0822 18:59:06.988040 6 log.go:172] (0xc002002460) (1) Data frame handling
I0822 18:59:06.988049 6 log.go:172] (0xc002002460) (1) Data frame sent
I0822 18:59:06.988134 6 log.go:172] (0xc0029b20b0) (0xc002002460) Stream removed, broadcasting: 1
I0822 18:59:06.988165 6 log.go:172] (0xc0029b20b0) Go away received
I0822 18:59:06.988284 6 log.go:172] (0xc0029b20b0) (0xc002002460) Stream removed, broadcasting: 1
I0822 18:59:06.988315 6 log.go:172] (0xc0029b20b0) (0xc002fdcc80) Stream removed, broadcasting: 3
I0822 18:59:06.988337 6 log.go:172] (0xc0029b20b0) (0xc002fdcd20) Stream removed, broadcasting: 5
Aug 22 18:59:06.988: INFO: Exec stderr: ""
Aug 22 18:59:06.988: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4351 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 18:59:06.988: INFO: >>> kubeConfig: /root/.kube/config
I0822 18:59:07.016977 6 log.go:172] (0xc001b76f20) (0xc002987ea0) Create stream
I0822 18:59:07.017012 6 log.go:172] (0xc001b76f20) (0xc002987ea0) Stream added, broadcasting: 1
I0822 18:59:07.019653 6 log.go:172] (0xc001b76f20) Reply frame received for 1
I0822 18:59:07.019696 6 log.go:172] (0xc001b76f20) (0xc002987f40) Create stream
I0822 18:59:07.019712 6 log.go:172] (0xc001b76f20) (0xc002987f40) Stream added, broadcasting: 3
I0822 18:59:07.020503 6 log.go:172] (0xc001b76f20) Reply frame received for 3
I0822 18:59:07.020541 6 log.go:172] (0xc001b76f20) (0xc002002640) Create stream
I0822 18:59:07.020554 6 log.go:172] (0xc001b76f20) (0xc002002640) Stream added, broadcasting: 5
I0822 18:59:07.021308 6 log.go:172] (0xc001b76f20) Reply frame received for 5
I0822 18:59:07.078798 6 log.go:172] (0xc001b76f20) Data frame received for 5
I0822 18:59:07.078845 6 log.go:172] (0xc002002640) (5) Data frame handling
I0822 18:59:07.078880 6 log.go:172] (0xc001b76f20) Data frame received for 3
I0822 18:59:07.078902 6 log.go:172] (0xc002987f40) (3) Data frame handling
I0822 18:59:07.078933 6 log.go:172] (0xc002987f40) (3) Data frame sent
I0822 18:59:07.078970 6 log.go:172] (0xc001b76f20) Data frame received for 3
I0822 18:59:07.078997 6 log.go:172] (0xc002987f40) (3) Data frame handling
I0822 18:59:07.080101 6 log.go:172] (0xc001b76f20) Data frame received for 1
I0822 18:59:07.080122 6 log.go:172] (0xc002987ea0) (1) Data frame handling
I0822 18:59:07.080134 6 log.go:172] (0xc002987ea0) (1) Data frame sent
I0822 18:59:07.080158 6 log.go:172] (0xc001b76f20) (0xc002987ea0) Stream removed, broadcasting: 1
I0822 18:59:07.080185 6 log.go:172] (0xc001b76f20) Go away received
I0822 18:59:07.080267 6 log.go:172] (0xc001b76f20) (0xc002987ea0) Stream removed, broadcasting: 1
I0822 18:59:07.080285 6 log.go:172] (0xc001b76f20) (0xc002987f40) Stream removed, broadcasting: 3
I0822 18:59:07.080294 6 log.go:172] (0xc001b76f20) (0xc002002640) Stream removed, broadcasting: 5
Aug 22 18:59:07.080: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:59:07.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4351" for this suite.

• [SLOW TEST:15.209 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":959,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:59:07.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0822 18:59:18.710597 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 22 18:59:18.710: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 18:59:18.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9980" for this suite.

• [SLOW TEST:11.629 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":56,"skipped":967,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 18:59:18.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-968.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-968.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-968.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 18:59:28.862: INFO: DNS probes using dns-test-e63c205e-465f-4919-86c8-a9b3c600d142 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-968.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-968.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-968.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 18:59:41.559: INFO: File wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local from pod dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 22 18:59:41.563: INFO: File jessie_udp@dns-test-service-3.dns-968.svc.cluster.local from pod dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 22 18:59:41.563: INFO: Lookups using dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 failed for: [wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local jessie_udp@dns-test-service-3.dns-968.svc.cluster.local]
Aug 22 18:59:46.767: INFO: File wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local from pod dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 22 18:59:46.813: INFO: File jessie_udp@dns-test-service-3.dns-968.svc.cluster.local from pod dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 22 18:59:46.813: INFO: Lookups using dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 failed for: [wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local jessie_udp@dns-test-service-3.dns-968.svc.cluster.local]
Aug 22 18:59:51.568: INFO: File wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local from pod dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 22 18:59:51.576: INFO: File jessie_udp@dns-test-service-3.dns-968.svc.cluster.local from pod dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
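For context on what these probes assert: each probe pod records the dig output shown in the STEP lines into /results, and the harness compares it with the expected CNAME target, so the "contains 'foo.example.com. ' instead of 'bar.example.com.'" records are ordinary propagation lag after the ExternalName change, retried until they converge. A minimal standalone sketch of the same check in Go, using the stdlib resolver in place of dig; the service name and expected target are copied from the log, and the lookup is only meaningful from inside the cluster:

package main

import (
	"fmt"
	"net"
)

// Sketch of the assertion behind the probes above: the ExternalName
// service's cluster DNS name should resolve to a CNAME for the
// configured external host.
func main() {
	const name = "dns-test-service-3.dns-968.svc.cluster.local"
	const want = "bar.example.com." // expected after the externalName update

	cname, err := net.LookupCNAME(name)
	if err != nil {
		fmt.Println("lookup failed (retry):", err)
		return
	}
	if cname != want {
		fmt.Printf("stale record: got %q, want %q (retry)\n", cname, want)
		return
	}
	fmt.Println("CNAME matches:", cname)
}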
Aug 22 18:59:51.576: INFO: Lookups using dns-968/dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 failed for: [wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local jessie_udp@dns-test-service-3.dns-968.svc.cluster.local]
Aug 22 18:59:56.927: INFO: DNS probes using dns-test-44a36b26-003d-40f3-b3b2-6963199b72d7 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-968.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-968.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-968.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-968.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 19:00:09.144: INFO: DNS probes using dns-test-98bef516-eed3-4f12-ae38-385498a93266 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:00:10.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-968" for this suite.

• [SLOW TEST:51.994 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":57,"skipped":1007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:00:10.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:00:12.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d" in namespace "downward-api-6079" to be "success or failure"
Aug 22 19:00:12.443: INFO: Pod "downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d": Phase="Pending", Reason="", readiness=false. Elapsed: 290.687469ms
Aug 22 19:00:14.447: INFO: Pod "downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29450329s
Aug 22 19:00:17.175: INFO: Pod "downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.02218503s
Aug 22 19:00:19.426: INFO: Pod "downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.273617595s
Aug 22 19:00:21.440: INFO: Pod "downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.287822042s
STEP: Saw pod success
Aug 22 19:00:21.440: INFO: Pod "downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d" satisfied condition "success or failure"
Aug 22 19:00:21.443: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d container client-container: 
STEP: delete the pod
Aug 22 19:00:21.590: INFO: Waiting for pod downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d to disappear
Aug 22 19:00:21.624: INFO: Pod downwardapi-volume-22c536fc-be5e-4261-934c-926b0ae8531d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:00:21.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6079" for this suite.

• [SLOW TEST:10.945 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1032,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:00:21.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 22 19:00:39.276: INFO: Successfully updated pod "annotationupdatee212acd2-4349-430b-81df-5f72f51259dc"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:00:41.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3932" for this suite.

• [SLOW TEST:20.079 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1042,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:00:41.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-f455eeac-1629-4da8-99f7-66f98e3f231b
STEP: Creating a pod to test consume secrets
Aug 22 19:00:42.120: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18" in namespace "projected-1323" to be "success or failure"
Aug 22 19:00:42.154: INFO: Pod "pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18": Phase="Pending", Reason="", readiness=false. Elapsed: 34.112379ms
Aug 22 19:00:44.159: INFO: Pod "pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038730983s
Aug 22 19:00:46.169: INFO: Pod "pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048677064s
Aug 22 19:00:48.210: INFO: Pod "pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089853434s
Aug 22 19:00:50.372: INFO: Pod "pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.25195608s
STEP: Saw pod success
Aug 22 19:00:50.372: INFO: Pod "pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18" satisfied condition "success or failure"
Aug 22 19:00:50.375: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18 container projected-secret-volume-test: 
STEP: delete the pod
Aug 22 19:00:50.948: INFO: Waiting for pod pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18 to disappear
Aug 22 19:00:50.960: INFO: Pod pod-projected-secrets-f793a61b-26ef-4c8a-93a7-3fec76077e18 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:00:50.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1323" for this suite.

• [SLOW TEST:9.227 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1054,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:00:50.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-0459de83-8a5c-4f29-880e-cddd9d2ed520
STEP: Creating a pod to test consume configMaps
Aug 22 19:00:51.085: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7" in namespace "projected-651" to be "success or failure"
Aug 22 19:00:51.098: INFO: Pod "pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.060203ms
Aug 22 19:00:53.102: INFO: Pod "pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017389362s
Aug 22 19:00:55.106: INFO: Pod "pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021643652s
STEP: Saw pod success
Aug 22 19:00:55.106: INFO: Pod "pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7" satisfied condition "success or failure"
Aug 22 19:00:55.110: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 22 19:00:55.259: INFO: Waiting for pod pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7 to disappear
Aug 22 19:00:55.277: INFO: Pod pod-projected-configmaps-d68141c6-2527-4f8e-9a46-1bb28c4b78b7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:00:55.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-651" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1078,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:00:55.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:00:55.418: INFO: Waiting up to 5m0s for pod "downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76" in namespace "projected-4369" to be "success or failure"
Aug 22 19:00:55.421: INFO: Pod "downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868917ms
Aug 22 19:00:57.533: INFO: Pod "downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115118639s
Aug 22 19:00:59.537: INFO: Pod "downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118864604s
STEP: Saw pod success
Aug 22 19:00:59.537: INFO: Pod "downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76" satisfied condition "success or failure"
Aug 22 19:00:59.540: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76 container client-container: 
STEP: delete the pod
Aug 22 19:00:59.607: INFO: Waiting for pod downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76 to disappear
Aug 22 19:00:59.619: INFO: Pod downwardapi-volume-379e4f22-0dcf-468d-b630-984afe881d76 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:00:59.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4369" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1078,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:00:59.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 22 19:01:08.268: INFO: Successfully updated pod "pod-update-activedeadlineseconds-53fe8492-6e7f-4de6-9f55-4f34ea50d2f6"
Aug 22 19:01:08.268: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-53fe8492-6e7f-4de6-9f55-4f34ea50d2f6" in namespace "pods-2654" to be "terminated due to deadline exceeded"
Aug 22 19:01:08.301: INFO: Pod "pod-update-activedeadlineseconds-53fe8492-6e7f-4de6-9f55-4f34ea50d2f6": Phase="Running", Reason="", readiness=true. Elapsed: 33.100758ms
Aug 22 19:01:10.305: INFO: Pod "pod-update-activedeadlineseconds-53fe8492-6e7f-4de6-9f55-4f34ea50d2f6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.037254416s
Aug 22 19:01:10.305: INFO: Pod "pod-update-activedeadlineseconds-53fe8492-6e7f-4de6-9f55-4f34ea50d2f6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:01:10.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2654" for this suite.
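What this test exercises is one of the few in-place pod spec mutations the API allows: activeDeadlineSeconds may be set (or shortened) on a running pod, after which the kubelet fails the pod with reason DeadlineExceeded, exactly as the Phase="Failed" record above shows. A hedged client-go sketch of the same update; the namespace matches the log, while the pod name, kubeconfig path, and recent client-go signatures are assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods := cs.CoreV1().Pods("pods-2654")
	pod, err := pods.Get(context.TODO(), "example-pod", metav1.GetOptions{}) // illustrative name
	if err != nil {
		panic(err)
	}

	// Setting or lowering activeDeadlineSeconds is permitted on a live pod;
	// once exceeded, the kubelet marks it Failed with reason DeadlineExceeded.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated; pod will fail with DeadlineExceeded after the deadline")
}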
• [SLOW TEST:10.686 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1131,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:01:10.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6538
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 22 19:01:10.490: INFO: Found 0 stateful pods, waiting for 3
Aug 22 19:01:20.863: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:01:20.863: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:01:20.863: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 22 19:01:30.518: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:01:30.518: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:01:30.518: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 22 19:01:30.626: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 22 19:01:40.725: INFO: Updating stateful set ss2
Aug 22 19:01:40.754: INFO: Waiting for Pod statefulset-6538/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 22 19:01:50.759: INFO: Waiting for Pod statefulset-6538/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 22 19:02:01.666: INFO: Found 2 stateful pods, waiting for 3
Aug 22 19:02:11.731: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:02:11.731: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:02:11.731: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 22 19:02:11.850: INFO: Updating stateful set ss2
Aug 22 19:02:12.101: INFO: Waiting for Pod statefulset-6538/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 22 19:02:22.263: INFO: Updating stateful set ss2
Aug 22 19:02:22.385: INFO: Waiting for StatefulSet statefulset-6538/ss2 to complete update
Aug 22 19:02:22.385: INFO: Waiting for Pod statefulset-6538/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 22 19:02:32.392: INFO: Waiting for StatefulSet statefulset-6538/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 22 19:02:42.391: INFO: Deleting all statefulset in ns statefulset-6538
Aug 22 19:02:42.393: INFO: Scaling statefulset ss2 to 0
Aug 22 19:03:12.410: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:03:12.413: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:03:12.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6538" for this suite.
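The canary and phased behavior above hangs off spec.updateStrategy.rollingUpdate.partition: only pods whose ordinal is >= the partition move to the update revision (the two revisions named in the records above), so with 3 replicas a partition of 2 canaries only ss2-2, and lowering the partition step by step phases the rest of the rollout. A sketch of one such canary step with client-go; the namespace, set name, and image come from the log, while the kubeconfig path and recent client-go signatures are assumptions:

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sets := cs.AppsV1().StatefulSets("statefulset-6538")
	ss, err := sets.Get(context.TODO(), "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Only pods with ordinal >= partition are rolled to the new revision;
	// lowering the partition later continues the phased rollout.
	partition := int32(2) // canary: with 3 replicas, only ss2-2 updates
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	if _, err := sets.Update(context.TODO(), ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("canary update submitted; lower the partition to continue the rollout")
}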
• [SLOW TEST:122.127 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":64,"skipped":1136,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:03:12.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:03:12.523: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
alternatives.log
containers/
[the same two-entry directory listing was returned for each of the remaining proxy attempts; the per-attempt "(n) ..." headers and timings were lost in extraction]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 22 19:03:12.680: INFO: Waiting up to 5m0s for pod "pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916" in namespace "emptydir-3008" to be "success or failure"
Aug 22 19:03:12.731: INFO: Pod "pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916": Phase="Pending", Reason="", readiness=false. Elapsed: 51.031578ms
Aug 22 19:03:14.778: INFO: Pod "pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097551278s
Aug 22 19:03:16.781: INFO: Pod "pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100592206s
Aug 22 19:03:18.864: INFO: Pod "pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.184023972s
STEP: Saw pod success
Aug 22 19:03:18.864: INFO: Pod "pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916" satisfied condition "success or failure"
Aug 22 19:03:18.898: INFO: Trying to get logs from node jerma-worker pod pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916 container test-container: 
STEP: delete the pod
Aug 22 19:03:18.991: INFO: Waiting for pod pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916 to disappear
Aug 22 19:03:18.999: INFO: Pod pod-7e0bac4c-69bc-4c75-9ef1-b9374b6e2916 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:03:18.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3008" for this suite.

• [SLOW TEST:6.457 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1223,"failed":0}
SSSSSSS
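The (root,0644,default) case just completed reduces to: mount an emptyDir volume on the node's default medium, have a root container create a mode-0644 file in it, and verify the observed mode and content. A rough, hedged equivalent of the pod the test creates; the real suite uses its mounttest image, so the busybox command and all names here are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// StorageMediumDefault ("") selects the node's default,
				// disk-backed medium, matching the "default" in the test name.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative; the e2e suite uses its mounttest image
				Command: []string{"sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && stat -c '%a' /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "- its log should read 644 on success")
}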
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:03:19.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2074
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 22 19:03:19.199: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 22 19:03:43.635: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.80:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2074 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 19:03:43.635: INFO: >>> kubeConfig: /root/.kube/config
I0822 19:03:43.665553       6 log.go:172] (0xc001032790) (0xc0028d8a00) Create stream
I0822 19:03:43.665587       6 log.go:172] (0xc001032790) (0xc0028d8a00) Stream added, broadcasting: 1
I0822 19:03:43.667820       6 log.go:172] (0xc001032790) Reply frame received for 1
I0822 19:03:43.667874       6 log.go:172] (0xc001032790) (0xc0028d8aa0) Create stream
I0822 19:03:43.667890       6 log.go:172] (0xc001032790) (0xc0028d8aa0) Stream added, broadcasting: 3
I0822 19:03:43.668508       6 log.go:172] (0xc001032790) Reply frame received for 3
I0822 19:03:43.668535       6 log.go:172] (0xc001032790) (0xc0028d8be0) Create stream
I0822 19:03:43.668542       6 log.go:172] (0xc001032790) (0xc0028d8be0) Stream added, broadcasting: 5
I0822 19:03:43.669204       6 log.go:172] (0xc001032790) Reply frame received for 5
I0822 19:03:43.733979       6 log.go:172] (0xc001032790) Data frame received for 3
I0822 19:03:43.734008       6 log.go:172] (0xc0028d8aa0) (3) Data frame handling
I0822 19:03:43.734023       6 log.go:172] (0xc0028d8aa0) (3) Data frame sent
I0822 19:03:43.734030       6 log.go:172] (0xc001032790) Data frame received for 3
I0822 19:03:43.734041       6 log.go:172] (0xc0028d8aa0) (3) Data frame handling
I0822 19:03:43.734398       6 log.go:172] (0xc001032790) Data frame received for 5
I0822 19:03:43.734465       6 log.go:172] (0xc0028d8be0) (5) Data frame handling
I0822 19:03:43.735680       6 log.go:172] (0xc001032790) Data frame received for 1
I0822 19:03:43.735700       6 log.go:172] (0xc0028d8a00) (1) Data frame handling
I0822 19:03:43.735721       6 log.go:172] (0xc0028d8a00) (1) Data frame sent
I0822 19:03:43.735743       6 log.go:172] (0xc001032790) (0xc0028d8a00) Stream removed, broadcasting: 1
I0822 19:03:43.735770       6 log.go:172] (0xc001032790) Go away received
I0822 19:03:43.735851       6 log.go:172] (0xc001032790) (0xc0028d8a00) Stream removed, broadcasting: 1
I0822 19:03:43.735865       6 log.go:172] (0xc001032790) (0xc0028d8aa0) Stream removed, broadcasting: 3
I0822 19:03:43.735873       6 log.go:172] (0xc001032790) (0xc0028d8be0) Stream removed, broadcasting: 5
Aug 22 19:03:43.735: INFO: Found all expected endpoints: [netserver-0]
Aug 22 19:03:43.822: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.70:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2074 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 19:03:43.822: INFO: >>> kubeConfig: /root/.kube/config
I0822 19:03:43.850251       6 log.go:172] (0xc0064e0210) (0xc001f7f220) Create stream
I0822 19:03:43.850279       6 log.go:172] (0xc0064e0210) (0xc001f7f220) Stream added, broadcasting: 1
I0822 19:03:43.852036       6 log.go:172] (0xc0064e0210) Reply frame received for 1
I0822 19:03:43.852061       6 log.go:172] (0xc0064e0210) (0xc002fdde00) Create stream
I0822 19:03:43.852069       6 log.go:172] (0xc0064e0210) (0xc002fdde00) Stream added, broadcasting: 3
I0822 19:03:43.853160       6 log.go:172] (0xc0064e0210) Reply frame received for 3
I0822 19:03:43.853191       6 log.go:172] (0xc0064e0210) (0xc0028d8c80) Create stream
I0822 19:03:43.853201       6 log.go:172] (0xc0064e0210) (0xc0028d8c80) Stream added, broadcasting: 5
I0822 19:03:43.854007       6 log.go:172] (0xc0064e0210) Reply frame received for 5
I0822 19:03:43.922531       6 log.go:172] (0xc0064e0210) Data frame received for 3
I0822 19:03:43.922567       6 log.go:172] (0xc002fdde00) (3) Data frame handling
I0822 19:03:43.922590       6 log.go:172] (0xc002fdde00) (3) Data frame sent
I0822 19:03:43.922600       6 log.go:172] (0xc0064e0210) Data frame received for 3
I0822 19:03:43.922613       6 log.go:172] (0xc002fdde00) (3) Data frame handling
I0822 19:03:43.922778       6 log.go:172] (0xc0064e0210) Data frame received for 5
I0822 19:03:43.922812       6 log.go:172] (0xc0028d8c80) (5) Data frame handling
I0822 19:03:43.924267       6 log.go:172] (0xc0064e0210) Data frame received for 1
I0822 19:03:43.924289       6 log.go:172] (0xc001f7f220) (1) Data frame handling
I0822 19:03:43.924304       6 log.go:172] (0xc001f7f220) (1) Data frame sent
I0822 19:03:43.924329       6 log.go:172] (0xc0064e0210) (0xc001f7f220) Stream removed, broadcasting: 1
I0822 19:03:43.924349       6 log.go:172] (0xc0064e0210) Go away received
I0822 19:03:43.924462       6 log.go:172] (0xc0064e0210) (0xc001f7f220) Stream removed, broadcasting: 1
I0822 19:03:43.924479       6 log.go:172] (0xc0064e0210) (0xc002fdde00) Stream removed, broadcasting: 3
I0822 19:03:43.924487       6 log.go:172] (0xc0064e0210) (0xc0028d8c80) Stream removed, broadcasting: 5
Aug 22 19:03:43.924: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:03:43.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2074" for this suite.

• [SLOW TEST:24.882 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1230,"failed":0}
SS
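The endpoint check above is nothing more than curl run inside the host-network test pod against each netserver pod's /hostName handler on port 8080. A minimal manual replay of the same probe, assuming the namespace and pod from this run still exist (the pod IPs are whatever the current run assigned):

NS=pod-network-test-2074
for ip in 10.244.2.80 10.244.1.70; do
  # the handler answers with the serving pod's hostname, e.g. "netserver-0"
  kubectl --kubeconfig=/root/.kube/config exec -n "$NS" host-test-container-pod -c agnhost -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://$ip:8080/hostName"
done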
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:03:43.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 22 19:03:44.239: INFO: Waiting up to 5m0s for pod "pod-222b6a2f-fd87-428a-9c96-0c403b3148a4" in namespace "emptydir-6950" to be "success or failure"
Aug 22 19:03:44.252: INFO: Pod "pod-222b6a2f-fd87-428a-9c96-0c403b3148a4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.771671ms
Aug 22 19:03:46.373: INFO: Pod "pod-222b6a2f-fd87-428a-9c96-0c403b3148a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134465733s
Aug 22 19:03:48.377: INFO: Pod "pod-222b6a2f-fd87-428a-9c96-0c403b3148a4": Phase="Running", Reason="", readiness=true. Elapsed: 4.138743413s
Aug 22 19:03:50.469: INFO: Pod "pod-222b6a2f-fd87-428a-9c96-0c403b3148a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.230389198s
STEP: Saw pod success
Aug 22 19:03:50.469: INFO: Pod "pod-222b6a2f-fd87-428a-9c96-0c403b3148a4" satisfied condition "success or failure"
Aug 22 19:03:50.479: INFO: Trying to get logs from node jerma-worker2 pod pod-222b6a2f-fd87-428a-9c96-0c403b3148a4 container test-container: 
STEP: delete the pod
Aug 22 19:03:50.882: INFO: Waiting for pod pod-222b6a2f-fd87-428a-9c96-0c403b3148a4 to disappear
Aug 22 19:03:50.904: INFO: Pod pod-222b6a2f-fd87-428a-9c96-0c403b3148a4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:03:50.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6950" for this suite.

• [SLOW TEST:6.990 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
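The pod under test mounts an emptyDir with medium Memory, so the kubelet backs it with tmpfs and applies the default 0777 mode that the framework then asserts. A stand-in manifest showing the same shape (pod and volume names here are illustrative, not the generated ones):

# Hypothetical reproduction: mount a Memory-medium emptyDir and print the
# mount's mode; expect drwxrwxrwx on a tmpfs mount entry.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs -f emptydir-tmpfs-demo   # once the container has started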
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:03:50.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:03:51.982: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 22 19:03:56.986: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 22 19:03:58.992: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 22 19:04:00.995: INFO: Creating deployment "test-rollover-deployment"
Aug 22 19:04:01.015: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 22 19:04:03.063: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 22 19:04:03.070: INFO: Ensure that both replica sets have 1 created replica
Aug 22 19:04:03.239: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 22 19:04:03.244: INFO: Updating deployment test-rollover-deployment
Aug 22 19:04:03.244: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 22 19:04:05.684: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 22 19:04:05.735: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 22 19:04:05.742: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 19:04:05.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719843, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:07.748: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 19:04:07.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719843, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:09.749: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 19:04:09.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719849, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:11.750: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 19:04:11.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719849, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:13.748: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 19:04:13.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719849, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:15.851: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 19:04:15.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719849, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:17.748: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 19:04:17.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719849, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:20.296: INFO: 
Aug 22 19:04:20.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719841, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:04:21.752: INFO: 
Aug 22 19:04:21.752: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 19:04:21.758: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-8868 /apis/apps/v1/namespaces/deployment-8868/deployments/test-rollover-deployment fe4bf4ac-4407-4174-8505-18494e7a202a 2542166 2 2020-08-22 19:04:00 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001b78428  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-22 19:04:01 +0000 UTC,LastTransitionTime:2020-08-22 19:04:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-22 19:04:20 +0000 UTC,LastTransitionTime:2020-08-22 19:04:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 22 19:04:21.760: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-8868 /apis/apps/v1/namespaces/deployment-8868/replicasets/test-rollover-deployment-574d6dfbff 209a5cde-fc8f-4736-916c-e606df714dc3 2542155 2 2020-08-22 19:04:03 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment fe4bf4ac-4407-4174-8505-18494e7a202a 0xc001d8cb77 0xc001d8cb78}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d8cc18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:04:21.760: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 22 19:04:21.761: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-8868 /apis/apps/v1/namespaces/deployment-8868/replicasets/test-rollover-controller 61bd2e84-7eb8-43a1-8312-71a0e4b9194e 2542165 2 2020-08-22 19:03:51 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment fe4bf4ac-4407-4174-8505-18494e7a202a 0xc001d8c9bf 0xc001d8c9d0}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001d8ca38  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:04:21.761: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-8868 /apis/apps/v1/namespaces/deployment-8868/replicasets/test-rollover-deployment-f6c94f66c 35590f9e-80ca-4bb3-8dc3-1ed6d9b305b9 2542101 2 2020-08-22 19:04:01 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment fe4bf4ac-4407-4174-8505-18494e7a202a 0xc001d8cc90 0xc001d8cc91}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d8cd08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:04:21.763: INFO: Pod "test-rollover-deployment-574d6dfbff-qwmkh" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-qwmkh test-rollover-deployment-574d6dfbff- deployment-8868 /api/v1/namespaces/deployment-8868/pods/test-rollover-deployment-574d6dfbff-qwmkh 12010980-affe-483a-83f7-70bd91be268b 2542125 0 2020-08-22 19:04:03 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 209a5cde-fc8f-4736-916c-e606df714dc3 0xc001b78cd7 0xc001b78cd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mpqhq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mpqhq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mpqhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:04:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:04:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:04:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.73,StartTime:2020-08-22 19:04:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 19:04:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://10a5dc095abcb73fd6318ca57c4e83a96fb2f15eb6711717e593e90de504c09c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:04:21.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8868" for this suite.

• [SLOW TEST:30.848 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":69,"skipped":1265,"failed":0}
SSSSSSSSSSSSS
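The rollover above is a plain rolling update constrained by maxUnavailable=0, maxSurge=1 and minReadySeconds=10: the adopted httpd replica set and the first-revision replica set both drain to zero only after the new agnhost pod has been ready for the full minReadySeconds window, which is why the status dump repeats for several polls. A rough kubectl equivalent of driving and observing such a rollover (namespace and names reused from the log; they disappear once the suite tears down):

# Swap the image to trigger a new revision, then watch the old
# ReplicaSets scale to zero as the new one becomes available.
kubectl -n deployment-8868 set image deployment/test-rollover-deployment \
  agnhost=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl -n deployment-8868 rollout status deployment/test-rollover-deployment
kubectl -n deployment-8868 get rs -l name=rollover-pod \
  -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas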
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:04:21.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 22 19:04:37.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 22 19:04:37.253: INFO: Pod pod-with-poststart-http-hook still exists
Aug 22 19:04:39.253: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 22 19:04:39.258: INFO: Pod pod-with-poststart-http-hook still exists
Aug 22 19:04:41.253: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 22 19:04:41.257: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:04:41.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6664" for this suite.

• [SLOW TEST:19.494 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1278,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
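The pod created here carries a postStart httpGet hook aimed at the handler pod from the BeforeEach step; the kubelet fires the hook right after the container starts, and the test then checks that the handler saw the request. A sketch of the shape of such a pod, with the handler address as an explicit placeholder (the real suite resolves it from the handler pod's status, and the /echo path is an assumption based on the agnhost netexec handler):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.80        # placeholder: the handler pod's IP
          port: 8080
          path: /echo?msg=poststart
EOF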
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:04:41.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 22 19:04:41.391: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7773 /api/v1/namespaces/watch-7773/configmaps/e2e-watch-test-watch-closed 46244932-b38e-4804-88bf-b6b0224ab3a9 2542288 0 2020-08-22 19:04:41 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 22 19:04:41.391: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7773 /api/v1/namespaces/watch-7773/configmaps/e2e-watch-test-watch-closed 46244932-b38e-4804-88bf-b6b0224ab3a9 2542289 0 2020-08-22 19:04:41 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 22 19:04:41.471: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7773 /api/v1/namespaces/watch-7773/configmaps/e2e-watch-test-watch-closed 46244932-b38e-4804-88bf-b6b0224ab3a9 2542290 0 2020-08-22 19:04:41 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 22 19:04:41.471: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7773 /api/v1/namespaces/watch-7773/configmaps/e2e-watch-test-watch-closed 46244932-b38e-4804-88bf-b6b0224ab3a9 2542291 0 2020-08-22 19:04:41 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:04:41.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7773" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":71,"skipped":1334,"failed":0}
SSSSS
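Restarting a watch is just a new watch request carrying the last resourceVersion the closed watch delivered; the API server replays every change made after that version, which is why both the MODIFIED (mutation 2) and DELETED events arrive on the new watch. The same behaviour is easy to see by hand through kubectl proxy (the resourceVersion below is the one from the second event in this log; any recent value works):

kubectl proxy --port=8001 &
# Resume from resourceVersion 2542289: the server streams the later
# MODIFIED and DELETED events for e2e-watch-test-watch-closed.
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/watch-7773/configmaps?watch=true&resourceVersion=2542289'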
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:04:41.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 22 19:04:41.584: INFO: namespace kubectl-8537
Aug 22 19:04:41.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8537'
Aug 22 19:04:58.923: INFO: stderr: ""
Aug 22 19:04:58.923: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 22 19:04:59.928: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:04:59.928: INFO: Found 0 / 1
Aug 22 19:05:01.413: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:05:01.413: INFO: Found 0 / 1
Aug 22 19:05:02.050: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:05:02.050: INFO: Found 0 / 1
Aug 22 19:05:02.926: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:05:02.926: INFO: Found 0 / 1
Aug 22 19:05:04.195: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:05:04.195: INFO: Found 0 / 1
Aug 22 19:05:04.926: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:05:04.926: INFO: Found 0 / 1
Aug 22 19:05:05.998: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:05:05.998: INFO: Found 1 / 1
Aug 22 19:05:05.998: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 22 19:05:06.001: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:05:06.001: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 22 19:05:06.001: INFO: wait on agnhost-master startup in kubectl-8537 
Aug 22 19:05:06.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-kgjw4 agnhost-master --namespace=kubectl-8537'
Aug 22 19:05:06.405: INFO: stderr: ""
Aug 22 19:05:06.405: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 22 19:05:06.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8537'
Aug 22 19:05:07.602: INFO: stderr: ""
Aug 22 19:05:07.603: INFO: stdout: "service/rm2 exposed\n"
Aug 22 19:05:08.161: INFO: Service rm2 in namespace kubectl-8537 found.
STEP: exposing service
Aug 22 19:05:10.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8537'
Aug 22 19:05:10.836: INFO: stderr: ""
Aug 22 19:05:10.836: INFO: stdout: "service/rm3 exposed\n"
Aug 22 19:05:11.458: INFO: Service rm3 in namespace kubectl-8537 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:05:13.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8537" for this suite.

• [SLOW TEST:32.109 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":72,"skipped":1339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
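Both expose commands derive a new Service from an existing object's selector, so rm2 (port 1234) and rm3 (port 2345) end up routing to the same agnhost pod on target port 6379. One way to confirm that while the namespace still exists:

# rm2 and rm3 should list the same pod IP behind different service ports.
kubectl -n kubectl-8537 get endpoints rm2 rm3
kubectl -n kubectl-8537 get svc rm2 rm3 \
  -o custom-columns=NAME:.metadata.name,PORT:.spec.ports[0].port,TARGET:.spec.ports[0].targetPort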
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:05:13.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:05:20.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3151" for this suite.

• [SLOW TEST:6.790 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1411,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
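With command and args left blank in the pod spec, the container falls back to the image's own ENTRYPOINT and CMD, which is all this test asserts. A quick illustration of the same point (pod name is illustrative, and this assumes a namespace you control, since containers-3151 is destroyed above):

# Create a pod without command/args, then show that the spec carries none;
# whatever runs comes from the image metadata.
kubectl run image-defaults-demo \
  --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --restart=Never
kubectl get pod image-defaults-demo \
  -o jsonpath='{.spec.containers[0].command} {.spec.containers[0].args}{"\n"}'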
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:05:20.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:05:37.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1999" for this suite.

• [SLOW TEST:16.704 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":74,"skipped":1452,"failed":0}
S
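The quota controller recomputes status.used as objects come and go, which is what the "captures configMap creation" and "released usage" steps poll for. A minimal reproduction of that bookkeeping (all names here are illustrative):

kubectl create namespace quota-demo
kubectl -n quota-demo create quota test-quota --hard=configmaps=2
kubectl -n quota-demo create configmap test-cm --from-literal=k=v
kubectl -n quota-demo get quota test-quota -o jsonpath='{.status.used.configmaps}{"\n"}'   # 1, once recalculated
kubectl -n quota-demo delete configmap test-cm
kubectl -n quota-demo get quota test-quota -o jsonpath='{.status.used.configmaps}{"\n"}'   # back to 0, once recalculated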
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:05:37.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:05:37.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96" in namespace "projected-9657" to be "success or failure"
Aug 22 19:05:37.638: INFO: Pod "downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96": Phase="Pending", Reason="", readiness=false. Elapsed: 146.746837ms
Aug 22 19:05:39.847: INFO: Pod "downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356114702s
Aug 22 19:05:41.851: INFO: Pod "downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359993362s
Aug 22 19:05:43.855: INFO: Pod "downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.364003227s
STEP: Saw pod success
Aug 22 19:05:43.855: INFO: Pod "downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96" satisfied condition "success or failure"
Aug 22 19:05:43.858: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96 container client-container: 
STEP: delete the pod
Aug 22 19:05:44.003: INFO: Waiting for pod downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96 to disappear
Aug 22 19:05:44.014: INFO: Pod downwardapi-volume-1a4c0273-7841-40de-ae99-ca681a55ee96 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:05:44.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9657" for this suite.

• [SLOW TEST:6.937 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1453,"failed":0}
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:05:44.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:05:44.145: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 22 19:05:44.200: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 22 19:05:49.204: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 22 19:05:49.204: INFO: Creating deployment "test-rolling-update-deployment"
Aug 22 19:05:49.423: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 22 19:05:49.661: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set
Aug 22 19:05:51.668: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 22 19:05:51.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:05:53.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719949, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:05:55.673: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 19:05:55.680: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3758 /apis/apps/v1/namespaces/deployment-3758/deployments/test-rolling-update-deployment 06d94261-63eb-4861-a603-8e39b24b1671 2542663 1 2020-08-22 19:05:49 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00312b588  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-22 19:05:49 +0000 UTC,LastTransitionTime:2020-08-22 19:05:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-22 19:05:55 +0000 UTC,LastTransitionTime:2020-08-22 19:05:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 22 19:05:55.682: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-3758 /apis/apps/v1/namespaces/deployment-3758/replicasets/test-rolling-update-deployment-67cf4f6444 b3eeadc1-524f-4dd7-86e5-44518ae00213 2542652 1 2020-08-22 19:05:49 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 06d94261-63eb-4861-a603-8e39b24b1671 0xc0028ff7a7 0xc0028ff7a8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028ff838  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:05:55.683: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 22 19:05:55.683: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3758 /apis/apps/v1/namespaces/deployment-3758/replicasets/test-rolling-update-controller edbf4725-2e2d-45b1-ac37-c866d07ee0c4 2542662 2 2020-08-22 19:05:44 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 06d94261-63eb-4861-a603-8e39b24b1671 0xc0028ff6d7 0xc0028ff6d8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0028ff738  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:05:55.686: INFO: Pod "test-rolling-update-deployment-67cf4f6444-rvtx6" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-rvtx6 test-rolling-update-deployment-67cf4f6444- deployment-3758 /api/v1/namespaces/deployment-3758/pods/test-rolling-update-deployment-67cf4f6444-rvtx6 6a5afdb5-4022-446b-826b-2604addcd921 2542651 0 2020-08-22 19:05:49 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 b3eeadc1-524f-4dd7-86e5-44518ae00213 0xc0006a9237 0xc0006a9238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7pqmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7pqmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7pqmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:05:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:05:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:05:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:05:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.85,StartTime:2020-08-22 19:05:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 19:05:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://820ecc0e3000ffdd99fc7972e340d81ed86ed3d39dc717ec0855a312918bf40b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:05:55.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3758" for this suite.

• [SLOW TEST:11.672 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":76,"skipped":1453,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:05:55.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-f688975c-3afc-4a2b-8249-477e8a9fd2be
STEP: Creating a pod to test consume secrets
Aug 22 19:05:56.428: INFO: Waiting up to 5m0s for pod "pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f" in namespace "secrets-6937" to be "success or failure"
Aug 22 19:05:56.430: INFO: Pod "pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.690537ms
Aug 22 19:05:58.562: INFO: Pod "pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134373145s
Aug 22 19:06:00.566: INFO: Pod "pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138046171s
Aug 22 19:06:02.590: INFO: Pod "pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.16199635s
STEP: Saw pod success
Aug 22 19:06:02.590: INFO: Pod "pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f" satisfied condition "success or failure"
Aug 22 19:06:02.592: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f container secret-env-test: 
STEP: delete the pod
Aug 22 19:06:03.008: INFO: Waiting for pod pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f to disappear
Aug 22 19:06:03.086: INFO: Pod pod-secrets-cf419a57-3b0e-45a2-bd32-1fe5a5a5170f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:03.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6937" for this suite.

• [SLOW TEST:7.574 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1473,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:03.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:10.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3740" for this suite.

• [SLOW TEST:6.886 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1482,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:10.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:06:10.578: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-24cac578-9274-44ac-8013-f73481d66d90" in namespace "security-context-test-1316" to be "success or failure"
Aug 22 19:06:11.135: INFO: Pod "busybox-readonly-false-24cac578-9274-44ac-8013-f73481d66d90": Phase="Pending", Reason="", readiness=false. Elapsed: 557.008972ms
Aug 22 19:06:13.139: INFO: Pod "busybox-readonly-false-24cac578-9274-44ac-8013-f73481d66d90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.561108681s
Aug 22 19:06:15.356: INFO: Pod "busybox-readonly-false-24cac578-9274-44ac-8013-f73481d66d90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.777958137s
Aug 22 19:06:17.361: INFO: Pod "busybox-readonly-false-24cac578-9274-44ac-8013-f73481d66d90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.78273841s
Aug 22 19:06:17.361: INFO: Pod "busybox-readonly-false-24cac578-9274-44ac-8013-f73481d66d90" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:17.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1316" for this suite.

• [SLOW TEST:7.225 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:17.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:06:18.084: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 22 19:06:20.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:06:22.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719978, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:06:25.674: INFO: Waiting for endpoint count of service "e2e-test-crd-conversion-webhook" to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:06:25.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:27.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8293" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:10.134 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":80,"skipped":1530,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:27.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 19:06:27.774: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 19:06:27.813: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 19:06:27.871: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 19:06:27.931: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.931: INFO: 	Container app ready: true, restart count 0
Aug 22 19:06:27.931: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-4rd2j from crd-webhook-8293 started at 2020-08-22 19:06:18 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.931: INFO: 	Container sample-crd-conversion-webhook ready: true, restart count 0
Aug 22 19:06:27.931: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.931: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 19:06:27.931: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.931: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 19:06:27.931: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 19:06:27.940: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.940: INFO: 	Container app ready: true, restart count 0
Aug 22 19:06:27.940: INFO: busybox-host-aliasesef2d757d-ea6f-44b5-9164-6cf122e275cd from kubelet-test-3740 started at 2020-08-22 19:06:04 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.940: INFO: 	Container busybox-host-aliasesef2d757d-ea6f-44b5-9164-6cf122e275cd ready: true, restart count 0
Aug 22 19:06:27.940: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.940: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 19:06:27.940: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:06:27.940: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162dac927d43676c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:28.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8491" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":81,"skipped":1538,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:28.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-9002/configmap-test-2f551163-8d4e-42b5-bb6a-a65c1a779cd1
STEP: Creating a pod to test consume configMaps
Aug 22 19:06:29.181: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c" in namespace "configmap-9002" to be "success or failure"
Aug 22 19:06:29.297: INFO: Pod "pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c": Phase="Pending", Reason="", readiness=false. Elapsed: 116.032384ms
Aug 22 19:06:31.301: INFO: Pod "pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120455064s
Aug 22 19:06:33.664: INFO: Pod "pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.482832308s
STEP: Saw pod success
Aug 22 19:06:33.664: INFO: Pod "pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c" satisfied condition "success or failure"
Aug 22 19:06:33.667: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c container env-test: 
STEP: delete the pod
Aug 22 19:06:33.970: INFO: Waiting for pod pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c to disappear
Aug 22 19:06:34.236: INFO: Pod pod-configmaps-8ef36840-1d82-4589-aef7-baa8d023b17c no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:34.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9002" for this suite.

• [SLOW TEST:5.277 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1548,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:34.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:06:34.421: INFO: Creating deployment "test-recreate-deployment"
Aug 22 19:06:34.427: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 22 19:06:34.484: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 22 19:06:34.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"test-recreate-deployment-799c574856\""}}, CollisionCount:(*int32)(nil)}
Aug 22 19:06:36.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:06:38.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:06:40.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733719994, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:06:42.512: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 22 19:06:42.518: INFO: Updating deployment test-recreate-deployment
Aug 22 19:06:42.518: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 19:06:44.202: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-5233 /apis/apps/v1/namespaces/deployment-5233/deployments/test-recreate-deployment 0caf0b7e-8c7a-4cd0-b45b-6530ff3bef92 2543037 2 2020-08-22 19:06:34 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049823c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-22 19:06:43 +0000 UTC,LastTransitionTime:2020-08-22 19:06:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-22 19:06:43 +0000 UTC,LastTransitionTime:2020-08-22 19:06:34 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 22 19:06:44.207: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-5233 /apis/apps/v1/namespaces/deployment-5233/replicasets/test-recreate-deployment-5f94c574ff b7e7de0d-f6cb-4f3e-bf13-b4e83ab85d3f 2543035 1 2020-08-22 19:06:42 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0caf0b7e-8c7a-4cd0-b45b-6530ff3bef92 0xc004983387 0xc004983388}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049838f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:06:44.207: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 22 19:06:44.207: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-5233 /apis/apps/v1/namespaces/deployment-5233/replicasets/test-recreate-deployment-799c574856 2cfc84e3-3918-4794-8404-b526d4a15f78 2543025 2 2020-08-22 19:06:34 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0caf0b7e-8c7a-4cd0-b45b-6530ff3bef92 0xc004983ad7 0xc004983ad8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004983c98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:06:44.339: INFO: Pod "test-recreate-deployment-5f94c574ff-r7djd" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-r7djd test-recreate-deployment-5f94c574ff- deployment-5233 /api/v1/namespaces/deployment-5233/pods/test-recreate-deployment-5f94c574ff-r7djd 7748bc7c-1510-4e11-8d20-42cfe0d50001 2543039 0 2020-08-22 19:06:42 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff b7e7de0d-f6cb-4f3e-bf13-b4e83ab85d3f 0xc0028075d7 0xc0028075d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7jfd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7jfd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7jfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:06:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:06:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:06:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:06:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 19:06:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:44.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5233" for this suite.

• [SLOW TEST:10.145 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":83,"skipped":1549,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:44.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:06:44.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 22 19:06:47.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8923 create -f -'
Aug 22 19:06:54.077: INFO: stderr: ""
Aug 22 19:06:54.077: INFO: stdout: "e2e-test-crd-publish-openapi-9697-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 22 19:06:54.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8923 delete e2e-test-crd-publish-openapi-9697-crds test-cr'
Aug 22 19:06:54.195: INFO: stderr: ""
Aug 22 19:06:54.195: INFO: stdout: "e2e-test-crd-publish-openapi-9697-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 22 19:06:54.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8923 apply -f -'
Aug 22 19:06:54.521: INFO: stderr: ""
Aug 22 19:06:54.521: INFO: stdout: "e2e-test-crd-publish-openapi-9697-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 22 19:06:54.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8923 delete e2e-test-crd-publish-openapi-9697-crds test-cr'
Aug 22 19:06:54.666: INFO: stderr: ""
Aug 22 19:06:54.666: INFO: stdout: "e2e-test-crd-publish-openapi-9697-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 22 19:06:54.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9697-crds'
Aug 22 19:06:55.650: INFO: stderr: ""
Aug 22 19:06:55.650: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9697-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:06:59.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8923" for this suite.

• [SLOW TEST:14.741 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":84,"skipped":1558,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:06:59.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:06:59.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9" in namespace "projected-1988" to be "success or failure"
Aug 22 19:06:59.535: INFO: Pod "downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023403ms
Aug 22 19:07:01.539: INFO: Pod "downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014187537s
Aug 22 19:07:03.543: INFO: Pod "downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017661036s
STEP: Saw pod success
Aug 22 19:07:03.543: INFO: Pod "downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9" satisfied condition "success or failure"
Aug 22 19:07:03.546: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9 container client-container: 
STEP: delete the pod
Aug 22 19:07:03.729: INFO: Waiting for pod downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9 to disappear
Aug 22 19:07:03.739: INFO: Pod downwardapi-volume-e1a51e70-8c44-4a7c-b85c-95b4d7f14ab9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:07:03.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1988" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1577,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:07:03.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:07:03.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6008'
Aug 22 19:07:04.130: INFO: stderr: ""
Aug 22 19:07:04.130: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 22 19:07:04.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6008'
Aug 22 19:07:04.710: INFO: stderr: ""
Aug 22 19:07:04.710: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 22 19:07:05.848: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:07:05.848: INFO: Found 0 / 1
Aug 22 19:07:06.717: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:07:06.718: INFO: Found 0 / 1
Aug 22 19:07:07.914: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:07:07.914: INFO: Found 0 / 1
Aug 22 19:07:08.716: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:07:08.716: INFO: Found 1 / 1
Aug 22 19:07:08.716: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 22 19:07:08.718: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 19:07:08.718: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 22 19:07:08.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-x47v5 --namespace=kubectl-6008'
Aug 22 19:07:08.824: INFO: stderr: ""
Aug 22 19:07:08.824: INFO: stdout: "Name:         agnhost-master-x47v5\nNamespace:    kubectl-6008\nPriority:     0\nNode:         jerma-worker/172.18.0.6\nStart Time:   Sat, 22 Aug 2020 19:07:04 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.2.92\nIPs:\n  IP:           10.244.2.92\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://0b2d45e6fc677579955ab70faeaf32baa9ba53b06bdedda0ad74f64eb60531ea\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 22 Aug 2020 19:07:07 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z7bzc (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-z7bzc:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-z7bzc\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  4s    default-scheduler      Successfully assigned kubectl-6008/agnhost-master-x47v5 to jerma-worker\n  Normal  Pulled     3s    kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s    kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, jerma-worker  Started container agnhost-master\n"
Aug 22 19:07:08.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6008'
Aug 22 19:07:09.508: INFO: stderr: ""
Aug 22 19:07:09.508: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-6008\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-x47v5\n"
Aug 22 19:07:09.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6008'
Aug 22 19:07:09.874: INFO: stderr: ""
Aug 22 19:07:09.874: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-6008\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.102.188.71\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.92:6379\nSession Affinity:  None\nEvents:            \n"
Aug 22 19:07:09.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Aug 22 19:07:10.076: INFO: stderr: ""
Aug 22 19:07:10.076: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:37:06 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Sat, 22 Aug 2020 19:07:07 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 22 Aug 2020 19:03:26 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 22 Aug 2020 19:03:26 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 22 Aug 2020 19:03:26 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 22 Aug 2020 19:03:26 +0000   Sat, 15 Aug 2020 09:37:40 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 e52c45bc589d48d995e8fd79ad5bf250\n  System UUID:                b981bdc7-d264-48ef-ab5e-3308e23aaf13\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-bvrm4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     7d9h\n  kube-system                 coredns-6955765f44-db8rh                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     7d9h\n  kube-system                 etcd-jerma-control-plane            
           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kindnet-j88mt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      7d9h\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kube-proxy-hmb6l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         7d9h\n  local-path-storage          local-path-provisioner-58f6947c7-p2cqw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d9h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 22 19:07:10.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6008'
Aug 22 19:07:10.274: INFO: stderr: ""
Aug 22 19:07:10.274: INFO: stdout: "Name:         kubectl-6008\nLabels:       e2e-framework=kubectl\n              e2e-run=ee002a9f-d561-4108-9d92-5c0834ec0275\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:07:10.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6008" for this suite.

• [SLOW TEST:6.535 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":86,"skipped":1601,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:07:10.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Aug 22 19:07:11.240: INFO: Waiting up to 5m0s for pod "var-expansion-8e9db524-1834-41de-a179-23e25ea779dd" in namespace "var-expansion-3127" to be "success or failure"
Aug 22 19:07:11.282: INFO: Pod "var-expansion-8e9db524-1834-41de-a179-23e25ea779dd": Phase="Pending", Reason="", readiness=false. Elapsed: 42.026994ms
Aug 22 19:07:13.315: INFO: Pod "var-expansion-8e9db524-1834-41de-a179-23e25ea779dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074798572s
Aug 22 19:07:15.317: INFO: Pod "var-expansion-8e9db524-1834-41de-a179-23e25ea779dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077680606s
STEP: Saw pod success
Aug 22 19:07:15.318: INFO: Pod "var-expansion-8e9db524-1834-41de-a179-23e25ea779dd" satisfied condition "success or failure"
Aug 22 19:07:15.320: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-8e9db524-1834-41de-a179-23e25ea779dd container dapi-container: 
STEP: delete the pod
Aug 22 19:07:15.419: INFO: Waiting for pod var-expansion-8e9db524-1834-41de-a179-23e25ea779dd to disappear
Aug 22 19:07:15.494: INFO: Pod var-expansion-8e9db524-1834-41de-a179-23e25ea779dd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:07:15.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3127" for this suite.

• [SLOW TEST:5.223 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1642,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:07:15.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:07:16.997: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 22 19:07:19.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720036, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720036, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720037, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720036, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:07:21.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720036, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720036, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720037, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720036, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:07:24.042: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:07:24.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:07:25.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-876" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:10.758 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":88,"skipped":1690,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:07:26.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-33dda671-b807-44c2-bdf3-040f36a8a985
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-33dda671-b807-44c2-bdf3-040f36a8a985
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:08:56.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4310" for this suite.

• [SLOW TEST:89.999 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1694,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:08:56.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 22 19:08:56.869: INFO: Waiting up to 5m0s for pod "client-containers-d74e627c-d954-493f-8dab-0e7076b24c03" in namespace "containers-6566" to be "success or failure"
Aug 22 19:08:56.875: INFO: Pod "client-containers-d74e627c-d954-493f-8dab-0e7076b24c03": Phase="Pending", Reason="", readiness=false. Elapsed: 5.645522ms
Aug 22 19:08:58.879: INFO: Pod "client-containers-d74e627c-d954-493f-8dab-0e7076b24c03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010004944s
Aug 22 19:09:01.299: INFO: Pod "client-containers-d74e627c-d954-493f-8dab-0e7076b24c03": Phase="Running", Reason="", readiness=true. Elapsed: 4.429609234s
Aug 22 19:09:03.520: INFO: Pod "client-containers-d74e627c-d954-493f-8dab-0e7076b24c03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.650961169s
STEP: Saw pod success
Aug 22 19:09:03.520: INFO: Pod "client-containers-d74e627c-d954-493f-8dab-0e7076b24c03" satisfied condition "success or failure"
Aug 22 19:09:03.540: INFO: Trying to get logs from node jerma-worker2 pod client-containers-d74e627c-d954-493f-8dab-0e7076b24c03 container test-container: 
STEP: delete the pod
Aug 22 19:09:04.128: INFO: Waiting for pod client-containers-d74e627c-d954-493f-8dab-0e7076b24c03 to disappear
Aug 22 19:09:04.279: INFO: Pod client-containers-d74e627c-d954-493f-8dab-0e7076b24c03 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:09:04.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6566" for this suite.

• [SLOW TEST:8.025 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1714,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:09:04.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ac124afe-65e2-4c35-aa06-8bdfbdabe940
STEP: Creating a pod to test consume configMaps
Aug 22 19:09:05.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb" in namespace "configmap-5622" to be "success or failure"
Aug 22 19:09:05.241: INFO: Pod "pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.701232ms
Aug 22 19:09:07.328: INFO: Pod "pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094125179s
Aug 22 19:09:09.331: INFO: Pod "pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097473041s
Aug 22 19:09:11.390: INFO: Pod "pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156520359s
Aug 22 19:09:13.483: INFO: Pod "pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.248944876s
STEP: Saw pod success
Aug 22 19:09:13.483: INFO: Pod "pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb" satisfied condition "success or failure"
Aug 22 19:09:13.485: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb container configmap-volume-test: 
STEP: delete the pod
Aug 22 19:09:13.534: INFO: Waiting for pod pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb to disappear
Aug 22 19:09:13.574: INFO: Pod pod-configmaps-e7a191c6-f1a5-4370-b6a6-c6eb6fdd47eb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:09:13.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5622" for this suite.

• [SLOW TEST:9.293 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1731,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:09:13.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 22 19:09:13.783: INFO: Waiting up to 5m0s for pod "pod-d02b78d9-addf-4f49-b116-562594c52af4" in namespace "emptydir-5746" to be "success or failure"
Aug 22 19:09:13.785: INFO: Pod "pod-d02b78d9-addf-4f49-b116-562594c52af4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281132ms
Aug 22 19:09:16.019: INFO: Pod "pod-d02b78d9-addf-4f49-b116-562594c52af4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236714861s
Aug 22 19:09:18.136: INFO: Pod "pod-d02b78d9-addf-4f49-b116-562594c52af4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353804121s
Aug 22 19:09:20.276: INFO: Pod "pod-d02b78d9-addf-4f49-b116-562594c52af4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.493487208s
STEP: Saw pod success
Aug 22 19:09:20.276: INFO: Pod "pod-d02b78d9-addf-4f49-b116-562594c52af4" satisfied condition "success or failure"
Aug 22 19:09:20.279: INFO: Trying to get logs from node jerma-worker pod pod-d02b78d9-addf-4f49-b116-562594c52af4 container test-container: 
STEP: delete the pod
Aug 22 19:09:20.467: INFO: Waiting for pod pod-d02b78d9-addf-4f49-b116-562594c52af4 to disappear
Aug 22 19:09:20.675: INFO: Pod pod-d02b78d9-addf-4f49-b116-562594c52af4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:09:20.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5746" for this suite.

• [SLOW TEST:7.103 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1738,"failed":0}
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:09:20.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:09:21.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703" in namespace "projected-7650" to be "success or failure"
Aug 22 19:09:21.110: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Pending", Reason="", readiness=false. Elapsed: 41.623115ms
Aug 22 19:09:23.114: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045838948s
Aug 22 19:09:25.118: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049286077s
Aug 22 19:09:27.408: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339557136s
Aug 22 19:09:29.574: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505316468s
Aug 22 19:09:32.089: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Pending", Reason="", readiness=false. Elapsed: 11.02049288s
Aug 22 19:09:34.161: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Pending", Reason="", readiness=false. Elapsed: 13.09281181s
Aug 22 19:09:36.325: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Running", Reason="", readiness=true. Elapsed: 15.25695152s
Aug 22 19:09:38.329: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.260788012s
STEP: Saw pod success
Aug 22 19:09:38.329: INFO: Pod "downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703" satisfied condition "success or failure"
Aug 22 19:09:38.332: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703 container client-container: 
STEP: delete the pod
Aug 22 19:09:38.997: INFO: Waiting for pod downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703 to disappear
Aug 22 19:09:39.190: INFO: Pod downwardapi-volume-135ec9e1-a8d4-40ac-9aea-96bf98404703 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:09:39.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7650" for this suite.

• [SLOW TEST:18.513 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1738,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:09:39.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 22 19:09:46.814: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:09:47.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2241" for this suite.

• [SLOW TEST:8.011 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1738,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:09:47.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-558
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 22 19:09:47.345: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 22 19:10:17.646: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.97 8081 | grep -v '^\s*$'] Namespace:pod-network-test-558 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 19:10:17.646: INFO: >>> kubeConfig: /root/.kube/config
I0822 19:10:17.680910       6 log.go:172] (0xc0010326e0) (0xc001f7f7c0) Create stream
I0822 19:10:17.680940       6 log.go:172] (0xc0010326e0) (0xc001f7f7c0) Stream added, broadcasting: 1
I0822 19:10:17.683995       6 log.go:172] (0xc0010326e0) Reply frame received for 1
I0822 19:10:17.684023       6 log.go:172] (0xc0010326e0) (0xc001f7f860) Create stream
I0822 19:10:17.684032       6 log.go:172] (0xc0010326e0) (0xc001f7f860) Stream added, broadcasting: 3
I0822 19:10:17.685325       6 log.go:172] (0xc0010326e0) Reply frame received for 3
I0822 19:10:17.685380       6 log.go:172] (0xc0010326e0) (0xc00302cb40) Create stream
I0822 19:10:17.685399       6 log.go:172] (0xc0010326e0) (0xc00302cb40) Stream added, broadcasting: 5
I0822 19:10:17.687428       6 log.go:172] (0xc0010326e0) Reply frame received for 5
I0822 19:10:18.741760       6 log.go:172] (0xc0010326e0) Data frame received for 3
I0822 19:10:18.741806       6 log.go:172] (0xc001f7f860) (3) Data frame handling
I0822 19:10:18.741840       6 log.go:172] (0xc001f7f860) (3) Data frame sent
I0822 19:10:18.742074       6 log.go:172] (0xc0010326e0) Data frame received for 5
I0822 19:10:18.742130       6 log.go:172] (0xc00302cb40) (5) Data frame handling
I0822 19:10:18.742188       6 log.go:172] (0xc0010326e0) Data frame received for 3
I0822 19:10:18.742251       6 log.go:172] (0xc001f7f860) (3) Data frame handling
I0822 19:10:18.744409       6 log.go:172] (0xc0010326e0) Data frame received for 1
I0822 19:10:18.744457       6 log.go:172] (0xc001f7f7c0) (1) Data frame handling
I0822 19:10:18.744481       6 log.go:172] (0xc001f7f7c0) (1) Data frame sent
I0822 19:10:18.744514       6 log.go:172] (0xc0010326e0) (0xc001f7f7c0) Stream removed, broadcasting: 1
I0822 19:10:18.744553       6 log.go:172] (0xc0010326e0) Go away received
I0822 19:10:18.744937       6 log.go:172] (0xc0010326e0) (0xc001f7f7c0) Stream removed, broadcasting: 1
I0822 19:10:18.744976       6 log.go:172] (0xc0010326e0) (0xc001f7f860) Stream removed, broadcasting: 3
I0822 19:10:18.745008       6 log.go:172] (0xc0010326e0) (0xc00302cb40) Stream removed, broadcasting: 5
Aug 22 19:10:18.745: INFO: Found all expected endpoints: [netserver-0]
Aug 22 19:10:18.964: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.84 8081 | grep -v '^\s*$'] Namespace:pod-network-test-558 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 19:10:18.964: INFO: >>> kubeConfig: /root/.kube/config
I0822 19:10:19.043107       6 log.go:172] (0xc001b76630) (0xc00302d360) Create stream
I0822 19:10:19.043144       6 log.go:172] (0xc001b76630) (0xc00302d360) Stream added, broadcasting: 1
I0822 19:10:19.045368       6 log.go:172] (0xc001b76630) Reply frame received for 1
I0822 19:10:19.045410       6 log.go:172] (0xc001b76630) (0xc0024a6c80) Create stream
I0822 19:10:19.045421       6 log.go:172] (0xc001b76630) (0xc0024a6c80) Stream added, broadcasting: 3
I0822 19:10:19.046225       6 log.go:172] (0xc001b76630) Reply frame received for 3
I0822 19:10:19.046273       6 log.go:172] (0xc001b76630) (0xc0028d8000) Create stream
I0822 19:10:19.046293       6 log.go:172] (0xc001b76630) (0xc0028d8000) Stream added, broadcasting: 5
I0822 19:10:19.046988       6 log.go:172] (0xc001b76630) Reply frame received for 5
I0822 19:10:20.094796       6 log.go:172] (0xc001b76630) Data frame received for 3
I0822 19:10:20.094839       6 log.go:172] (0xc0024a6c80) (3) Data frame handling
I0822 19:10:20.094865       6 log.go:172] (0xc0024a6c80) (3) Data frame sent
I0822 19:10:20.095007       6 log.go:172] (0xc001b76630) Data frame received for 5
I0822 19:10:20.095029       6 log.go:172] (0xc0028d8000) (5) Data frame handling
I0822 19:10:20.095061       6 log.go:172] (0xc001b76630) Data frame received for 3
I0822 19:10:20.095072       6 log.go:172] (0xc0024a6c80) (3) Data frame handling
I0822 19:10:20.096561       6 log.go:172] (0xc001b76630) Data frame received for 1
I0822 19:10:20.096589       6 log.go:172] (0xc00302d360) (1) Data frame handling
I0822 19:10:20.096625       6 log.go:172] (0xc00302d360) (1) Data frame sent
I0822 19:10:20.096654       6 log.go:172] (0xc001b76630) (0xc00302d360) Stream removed, broadcasting: 1
I0822 19:10:20.096714       6 log.go:172] (0xc001b76630) Go away received
I0822 19:10:20.096837       6 log.go:172] (0xc001b76630) (0xc00302d360) Stream removed, broadcasting: 1
I0822 19:10:20.096858       6 log.go:172] (0xc001b76630) (0xc0024a6c80) Stream removed, broadcasting: 3
I0822 19:10:20.096870       6 log.go:172] (0xc001b76630) (0xc0028d8000) Stream removed, broadcasting: 5
Aug 22 19:10:20.096: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:10:20.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-558" for this suite.

• [SLOW TEST:32.896 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1739,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:10:20.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-4d806c96-0d6f-4230-bbf8-955f5bdb1285
STEP: Creating a pod to test consume secrets
Aug 22 19:10:21.212: INFO: Waiting up to 5m0s for pod "pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f" in namespace "secrets-9186" to be "success or failure"
Aug 22 19:10:21.520: INFO: Pod "pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f": Phase="Pending", Reason="", readiness=false. Elapsed: 308.276044ms
Aug 22 19:10:23.524: INFO: Pod "pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312496257s
Aug 22 19:10:25.652: INFO: Pod "pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439813465s
Aug 22 19:10:27.724: INFO: Pod "pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.512319521s
STEP: Saw pod success
Aug 22 19:10:27.724: INFO: Pod "pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f" satisfied condition "success or failure"
Aug 22 19:10:28.053: INFO: Trying to get logs from node jerma-worker pod pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f container secret-volume-test: 
STEP: delete the pod
Aug 22 19:10:28.761: INFO: Waiting for pod pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f to disappear
Aug 22 19:10:28.842: INFO: Pod pod-secrets-39474c43-d5be-4aff-9ec2-b8a2ec0e437f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:10:28.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9186" for this suite.

• [SLOW TEST:9.127 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1766,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:10:29.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 22 19:10:30.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:10:48.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6965" for this suite.

• [SLOW TEST:19.576 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":97,"skipped":1801,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:10:48.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9544, will wait for the garbage collector to delete the pods
Aug 22 19:11:01.066: INFO: Deleting Job.batch foo took: 6.342114ms
Aug 22 19:11:12.166: INFO: Terminating Job.batch foo pods took: 11.100243885s
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:11:51.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9544" for this suite.

• [SLOW TEST:62.999 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":98,"skipped":1810,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:11:51.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:11:53.920: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:11:56.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720313, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720313, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720313, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720313, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:11:59.243: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one; this should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one; this should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:12:10.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4549" for this suite.
STEP: Destroying namespace "webhook-4549-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.516 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":99,"skipped":1824,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:12:12.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 22 19:12:14.763: INFO: created pod pod-service-account-defaultsa
Aug 22 19:12:14.763: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 22 19:12:14.783: INFO: created pod pod-service-account-mountsa
Aug 22 19:12:14.783: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 22 19:12:14.914: INFO: created pod pod-service-account-nomountsa
Aug 22 19:12:14.914: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 22 19:12:14.939: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 22 19:12:14.939: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 22 19:12:14.979: INFO: created pod pod-service-account-mountsa-mountspec
Aug 22 19:12:14.979: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 22 19:12:15.516: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 22 19:12:15.516: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 22 19:12:15.582: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 22 19:12:15.582: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 22 19:12:15.612: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 22 19:12:15.612: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 22 19:12:15.750: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 22 19:12:15.750: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:12:15.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-854" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":100,"skipped":1828,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:12:16.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 22 19:12:19.127: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:12:41.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4941" for this suite.

• [SLOW TEST:25.511 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1838,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:12:41.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-5lq6
STEP: Creating a pod to test atomic-volume-subpath
Aug 22 19:12:43.169: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5lq6" in namespace "subpath-376" to be "success or failure"
Aug 22 19:12:43.311: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Pending", Reason="", readiness=false. Elapsed: 141.911758ms
Aug 22 19:12:45.320: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150511635s
Aug 22 19:12:47.623: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453504922s
Aug 22 19:12:49.881: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.711548703s
Aug 22 19:12:51.884: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 8.715284896s
Aug 22 19:12:54.175: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 11.005711424s
Aug 22 19:12:56.402: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 13.232405937s
Aug 22 19:12:58.405: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 15.235703231s
Aug 22 19:13:00.408: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 17.239235522s
Aug 22 19:13:02.527: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 19.357854689s
Aug 22 19:13:04.587: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 21.417594542s
Aug 22 19:13:06.791: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 23.62186747s
Aug 22 19:13:08.898: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 25.729271745s
Aug 22 19:13:10.903: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Running", Reason="", readiness=true. Elapsed: 27.73390037s
Aug 22 19:13:13.120: INFO: Pod "pod-subpath-test-downwardapi-5lq6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.951011728s
STEP: Saw pod success
Aug 22 19:13:13.120: INFO: Pod "pod-subpath-test-downwardapi-5lq6" satisfied condition "success or failure"
Aug 22 19:13:13.123: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-5lq6 container test-container-subpath-downwardapi-5lq6: 
STEP: delete the pod
Aug 22 19:13:13.199: INFO: Waiting for pod pod-subpath-test-downwardapi-5lq6 to disappear
Aug 22 19:13:13.670: INFO: Pod pod-subpath-test-downwardapi-5lq6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5lq6
Aug 22 19:13:13.670: INFO: Deleting pod "pod-subpath-test-downwardapi-5lq6" in namespace "subpath-376"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:13:13.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-376" for this suite.

• [SLOW TEST:31.778 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":102,"skipped":1853,"failed":0}
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:13:13.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-3d94ce02-6638-479c-a775-cc1956ae0ad0
STEP: Creating a pod to test consume secrets
Aug 22 19:13:14.030: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e" in namespace "projected-1509" to be "success or failure"
Aug 22 19:13:14.055: INFO: Pod "pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.820534ms
Aug 22 19:13:16.059: INFO: Pod "pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029003862s
Aug 22 19:13:18.138: INFO: Pod "pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108640277s
Aug 22 19:13:20.432: INFO: Pod "pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.402086809s
Aug 22 19:13:22.435: INFO: Pod "pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.405073195s
STEP: Saw pod success
Aug 22 19:13:22.435: INFO: Pod "pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e" satisfied condition "success or failure"
Aug 22 19:13:22.437: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e container projected-secret-volume-test: 
STEP: delete the pod
Aug 22 19:13:22.988: INFO: Waiting for pod pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e to disappear
Aug 22 19:13:23.022: INFO: Pod pod-projected-secrets-2f9507ea-cc63-4755-91a8-1fa32754f48e no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:13:23.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1509" for this suite.

• [SLOW TEST:9.539 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1853,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:13:23.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4631.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4631.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4631.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4631.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 19:13:52.147: INFO: DNS probes using dns-4631/dns-test-0e22a2f3-b1a8-4531-ab5d-83d32657ae67 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:13:53.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4631" for this suite.

• [SLOW TEST:31.240 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":104,"skipped":1872,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:13:54.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:13:55.358: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:13:57.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5521" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":105,"skipped":1873,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:13:58.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 22 19:13:59.937: INFO: Waiting up to 5m0s for pod "pod-f83b1547-a75d-4708-8ea9-01208276f6e6" in namespace "emptydir-8929" to be "success or failure"
Aug 22 19:14:00.031: INFO: Pod "pod-f83b1547-a75d-4708-8ea9-01208276f6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 93.424043ms
Aug 22 19:14:02.186: INFO: Pod "pod-f83b1547-a75d-4708-8ea9-01208276f6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249059128s
Aug 22 19:14:04.373: INFO: Pod "pod-f83b1547-a75d-4708-8ea9-01208276f6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435381208s
Aug 22 19:14:06.376: INFO: Pod "pod-f83b1547-a75d-4708-8ea9-01208276f6e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.438422758s
STEP: Saw pod success
Aug 22 19:14:06.376: INFO: Pod "pod-f83b1547-a75d-4708-8ea9-01208276f6e6" satisfied condition "success or failure"
Aug 22 19:14:06.378: INFO: Trying to get logs from node jerma-worker pod pod-f83b1547-a75d-4708-8ea9-01208276f6e6 container test-container: 
STEP: delete the pod
Aug 22 19:14:06.985: INFO: Waiting for pod pod-f83b1547-a75d-4708-8ea9-01208276f6e6 to disappear
Aug 22 19:14:07.128: INFO: Pod pod-f83b1547-a75d-4708-8ea9-01208276f6e6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:14:07.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8929" for this suite.

• [SLOW TEST:8.299 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1876,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:14:07.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:14:08.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184" in namespace "projected-273" to be "success or failure"
Aug 22 19:14:09.087: INFO: Pod "downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184": Phase="Pending", Reason="", readiness=false. Elapsed: 181.116071ms
Aug 22 19:14:11.091: INFO: Pod "downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185344287s
Aug 22 19:14:13.354: INFO: Pod "downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448601884s
Aug 22 19:14:15.451: INFO: Pod "downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184": Phase="Pending", Reason="", readiness=false. Elapsed: 6.545109617s
Aug 22 19:14:17.454: INFO: Pod "downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.548871552s
STEP: Saw pod success
Aug 22 19:14:17.454: INFO: Pod "downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184" satisfied condition "success or failure"
Aug 22 19:14:17.456: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184 container client-container: 
STEP: delete the pod
Aug 22 19:14:17.535: INFO: Waiting for pod downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184 to disappear
Aug 22 19:14:17.604: INFO: Pod downwardapi-volume-588bd150-87d4-4350-b0ec-c7289dc75184 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:14:17.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-273" for this suite.

• [SLOW TEST:10.440 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1923,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:14:17.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 19:14:17.688: INFO: Waiting up to 5m0s for pod "downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858" in namespace "downward-api-5966" to be "success or failure"
Aug 22 19:14:17.690: INFO: Pod "downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248768ms
Aug 22 19:14:19.694: INFO: Pod "downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006453664s
Aug 22 19:14:21.697: INFO: Pod "downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009403444s
Aug 22 19:14:23.726: INFO: Pod "downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037707203s
Aug 22 19:14:25.728: INFO: Pod "downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040588075s
STEP: Saw pod success
Aug 22 19:14:25.728: INFO: Pod "downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858" satisfied condition "success or failure"
Aug 22 19:14:25.731: INFO: Trying to get logs from node jerma-worker2 pod downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858 container dapi-container: 
STEP: delete the pod
Aug 22 19:14:25.793: INFO: Waiting for pod downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858 to disappear
Aug 22 19:14:25.887: INFO: Pod downward-api-f06cd45e-8ea9-4df3-8f6c-4e18d4fae858 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:14:25.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5966" for this suite.

• [SLOW TEST:8.325 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1930,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:14:25.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Aug 22 19:14:27.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9947'
Aug 22 19:14:28.618: INFO: stderr: ""
Aug 22 19:14:28.618: INFO: stdout: "pod/pause created\n"
Aug 22 19:14:28.618: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 22 19:14:28.618: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9947" to be "running and ready"
Aug 22 19:14:28.780: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 162.065977ms
Aug 22 19:14:30.784: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165524942s
Aug 22 19:14:32.787: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168527504s
Aug 22 19:14:34.790: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.172049777s
Aug 22 19:14:34.790: INFO: Pod "pause" satisfied condition "running and ready"
Aug 22 19:14:34.790: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 22 19:14:34.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9947'
Aug 22 19:14:34.907: INFO: stderr: ""
Aug 22 19:14:34.907: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 22 19:14:34.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9947'
Aug 22 19:14:34.994: INFO: stderr: ""
Aug 22 19:14:34.994: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 22 19:14:34.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9947'
Aug 22 19:14:35.096: INFO: stderr: ""
Aug 22 19:14:35.096: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 22 19:14:35.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9947'
Aug 22 19:14:35.206: INFO: stderr: ""
Aug 22 19:14:35.206: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Aug 22 19:14:35.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9947'
Aug 22 19:14:35.462: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 19:14:35.462: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 22 19:14:35.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9947'
Aug 22 19:14:35.611: INFO: stderr: "No resources found in kubectl-9947 namespace.\n"
Aug 22 19:14:35.611: INFO: stdout: ""
Aug 22 19:14:35.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9947 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 22 19:14:36.038: INFO: stderr: ""
Aug 22 19:14:36.038: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:14:36.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9947" for this suite.

• [SLOW TEST:10.603 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":109,"skipped":1941,"failed":0}
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:14:36.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 22 19:14:38.166: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-481" to be "success or failure"
Aug 22 19:14:38.602: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 436.013489ms
Aug 22 19:14:40.630: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463926841s
Aug 22 19:14:42.827: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.661495271s
Aug 22 19:14:45.025: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.859343125s
Aug 22 19:14:47.527: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.360618848s
Aug 22 19:14:49.798: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.631746327s
Aug 22 19:14:51.893: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.727484896s
Aug 22 19:14:53.911: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.745492384s
STEP: Saw pod success
Aug 22 19:14:53.912: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 22 19:14:53.914: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 22 19:14:54.980: INFO: Waiting for pod pod-host-path-test to disappear
Aug 22 19:14:55.068: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:14:55.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-481" for this suite.

• [SLOW TEST:18.534 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1946,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:14:55.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 22 19:14:56.218: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545270 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 22 19:14:56.219: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545270 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 22 19:15:06.225: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545310 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 22 19:15:06.226: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545310 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 22 19:15:16.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545336 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 22 19:15:16.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545336 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 22 19:15:26.247: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545365 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 22 19:15:26.247: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-a 0280da9a-d374-493d-a3a2-c133ed54130a 2545365 0 2020-08-22 19:14:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 22 19:15:36.255: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-b f9960841-2d7d-417d-a6a7-569f363b7b0a 2545395 0 2020-08-22 19:15:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 22 19:15:36.255: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-b f9960841-2d7d-417d-a6a7-569f363b7b0a 2545395 0 2020-08-22 19:15:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 22 19:15:46.325: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-b f9960841-2d7d-417d-a6a7-569f363b7b0a 2545422 0 2020-08-22 19:15:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 22 19:15:46.325: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-configmap-b f9960841-2d7d-417d-a6a7-569f363b7b0a 2545422 0 2020-08-22 19:15:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:15:56.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7158" for this suite.

• [SLOW TEST:61.259 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":111,"skipped":1955,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:15:56.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:15:56.472: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:15:57.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6220" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":112,"skipped":1955,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:15:57.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:15:57.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 22 19:15:57.790: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-22T19:15:57Z generation:1 name:name1 resourceVersion:2545480 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9e66dbd5-32c5-456d-ac29-69b657f466ce] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 22 19:16:07.795: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-22T19:16:07Z generation:1 name:name2 resourceVersion:2545517 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:99653b1e-3b63-4ebe-bdce-c50981afe51d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 22 19:16:17.801: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-22T19:15:57Z generation:2 name:name1 resourceVersion:2545545 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9e66dbd5-32c5-456d-ac29-69b657f466ce] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 22 19:16:28.008: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-22T19:16:07Z generation:2 name:name2 resourceVersion:2545573 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:99653b1e-3b63-4ebe-bdce-c50981afe51d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 22 19:16:38.072: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-22T19:15:57Z generation:2 name:name1 resourceVersion:2545599 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9e66dbd5-32c5-456d-ac29-69b657f466ce] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 22 19:16:48.079: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-22T19:16:07Z generation:2 name:name2 resourceVersion:2545629 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:99653b1e-3b63-4ebe-bdce-c50981afe51d] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:16:58.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4973" for this suite.

• [SLOW TEST:61.810 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":113,"skipped":1962,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:16:58.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 22 19:17:07.405: INFO: Successfully updated pod "annotationupdate7c260136-f603-4987-a29b-a6ca2080b324"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:17:09.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5624" for this suite.

• [SLOW TEST:10.705 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1988,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:17:09.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7426.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7426.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 19:17:27.425: INFO: DNS probes using dns-7426/dns-test-f545dd12-2868-41e0-954b-371f8c4bf6cf succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:17:27.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7426" for this suite.

• [SLOW TEST:18.531 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":115,"skipped":1992,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:17:28.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:17:29.795: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 22 19:17:35.009: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 22 19:17:39.383: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 19:17:40.569: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-2871 /apis/apps/v1/namespaces/deployment-2871/deployments/test-cleanup-deployment 817a77df-37d8-4862-9c4d-c4b60757a22e 2545851 1 2020-08-22 19:17:39 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b6e598  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Aug 22 19:17:41.469: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-2871 /apis/apps/v1/namespaces/deployment-2871/replicasets/test-cleanup-deployment-55ffc6b7b6 bda7f97d-3b4e-4f97-9931-05900050b2b7 2545859 1 2020-08-22 19:17:39 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 817a77df-37d8-4862-9c4d-c4b60757a22e 0xc0031b8c77 0xc0031b8c78}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031b8ce8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:17:41.469: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 22 19:17:41.469: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-2871 /apis/apps/v1/namespaces/deployment-2871/replicasets/test-cleanup-controller 554d53bb-a2f0-45ea-81f8-0559d909d076 2545852 1 2020-08-22 19:17:29 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 817a77df-37d8-4862-9c4d-c4b60757a22e 0xc0031b8b77 0xc0031b8b78}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0031b8bd8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 22 19:17:42.117: INFO: Pod "test-cleanup-controller-nlgfr" is available:
&Pod{ObjectMeta:{test-cleanup-controller-nlgfr test-cleanup-controller- deployment-2871 /api/v1/namespaces/deployment-2871/pods/test-cleanup-controller-nlgfr e91f6c48-70d5-469f-b2db-3e3c233417e0 2545839 0 2020-08-22 19:17:29 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 554d53bb-a2f0-45ea-81f8-0559d909d076 0xc0031b9177 0xc0031b9178}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8hbmm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8hbmm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8hbmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:17:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:17:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:17:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:17:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.98,StartTime:2020-08-22 19:17:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 19:17:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://53a50c350406c4d26e66ce021768a9cdf5148cc129cb0fd349141e4a0a770e5c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 19:17:42.117: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-6kgrf" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-6kgrf test-cleanup-deployment-55ffc6b7b6- deployment-2871 /api/v1/namespaces/deployment-2871/pods/test-cleanup-deployment-55ffc6b7b6-6kgrf c9b2432d-bb6f-4f4b-b44e-5afd64031382 2545858 0 2020-08-22 19:17:40 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 bda7f97d-3b4e-4f97-9931-05900050b2b7 0xc0031b9307 0xc0031b9308}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8hbmm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8hbmm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8hbmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:17:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:17:42.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2871" for this suite.

• [SLOW TEST:14.641 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":116,"skipped":2022,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:17:42.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 22 19:17:43.785: INFO: >>> kubeConfig: /root/.kube/config
Aug 22 19:17:47.486: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:18:00.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7045" for this suite.

• [SLOW TEST:17.480 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":117,"skipped":2048,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:18:00.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:18:01.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 22 19:18:03.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2830 create -f -'
Aug 22 19:18:27.840: INFO: stderr: ""
Aug 22 19:18:27.841: INFO: stdout: "e2e-test-crd-publish-openapi-1693-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 22 19:18:27.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2830 delete e2e-test-crd-publish-openapi-1693-crds test-cr'
Aug 22 19:18:28.244: INFO: stderr: ""
Aug 22 19:18:28.244: INFO: stdout: "e2e-test-crd-publish-openapi-1693-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 22 19:18:28.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2830 apply -f -'
Aug 22 19:18:28.520: INFO: stderr: ""
Aug 22 19:18:28.520: INFO: stdout: "e2e-test-crd-publish-openapi-1693-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 22 19:18:28.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2830 delete e2e-test-crd-publish-openapi-1693-crds test-cr'
Aug 22 19:18:28.656: INFO: stderr: ""
Aug 22 19:18:28.656: INFO: stdout: "e2e-test-crd-publish-openapi-1693-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 22 19:18:28.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1693-crds'
Aug 22 19:18:29.456: INFO: stderr: ""
Aug 22 19:18:29.456: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1693-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:18:31.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2830" for this suite.

• [SLOW TEST:31.414 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":118,"skipped":2062,"failed":0}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:18:31.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 22 19:18:41.590: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2079 pod-service-account-7c2fe75b-8cbd-4bb3-9246-95d799a33bdd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 22 19:18:41.789: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2079 pod-service-account-7c2fe75b-8cbd-4bb3-9246-95d799a33bdd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 22 19:18:41.977: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2079 pod-service-account-7c2fe75b-8cbd-4bb3-9246-95d799a33bdd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:18:42.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2079" for this suite.

• [SLOW TEST:10.581 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":119,"skipped":2070,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:18:42.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-de304482-2158-4875-82b5-b46643692f54
STEP: Creating configMap with name cm-test-opt-upd-cc4cb8b2-92a3-4966-ad01-f40f7592c857
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-de304482-2158-4875-82b5-b46643692f54
STEP: Updating configmap cm-test-opt-upd-cc4cb8b2-92a3-4966-ad01-f40f7592c857
STEP: Creating configMap with name cm-test-opt-create-5a26219f-9b78-46db-91b4-01359b2a099a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:20:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6909" for this suite.

• [SLOW TEST:93.222 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2072,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:20:15.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 19:20:16.470: INFO: Waiting up to 5m0s for pod "downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5" in namespace "downward-api-6113" to be "success or failure"
Aug 22 19:20:16.509: INFO: Pod "downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5": Phase="Pending", Reason="", readiness=false. Elapsed: 39.049646ms
Aug 22 19:20:19.055: INFO: Pod "downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58531808s
Aug 22 19:20:21.399: INFO: Pod "downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.92883858s
Aug 22 19:20:23.801: INFO: Pod "downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.33068845s
Aug 22 19:20:25.804: INFO: Pod "downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.334037357s
STEP: Saw pod success
Aug 22 19:20:25.804: INFO: Pod "downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5" satisfied condition "success or failure"
Aug 22 19:20:25.807: INFO: Trying to get logs from node jerma-worker2 pod downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5 container dapi-container: 
STEP: delete the pod
Aug 22 19:20:26.005: INFO: Waiting for pod downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5 to disappear
Aug 22 19:20:26.080: INFO: Pod downward-api-137e34e7-969d-4e0f-bbc5-9228f0dd98b5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:20:26.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6113" for this suite.

• [SLOW TEST:10.931 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2085,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:20:26.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-fabb888d-f86b-4721-a465-682030747c4f
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:20:26.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5533" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":122,"skipped":2091,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:20:26.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:20:28.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:20:30.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:20:33.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:20:34.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:20:36.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720828, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:20:39.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 22 19:20:39.725: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:20:39.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8913" for this suite.
STEP: Destroying namespace "webhook-8913-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.023 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":123,"skipped":2106,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:20:39.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-492dee1c-b4fd-46fc-8ef6-7bcd29928c73
STEP: Creating a pod to test consume secrets
Aug 22 19:20:40.035: INFO: Waiting up to 5m0s for pod "pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2" in namespace "secrets-6260" to be "success or failure"
Aug 22 19:20:40.129: INFO: Pod "pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 93.857393ms
Aug 22 19:20:42.132: INFO: Pod "pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097422845s
Aug 22 19:20:44.261: INFO: Pod "pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.226301725s
Aug 22 19:20:46.274: INFO: Pod "pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238946051s
STEP: Saw pod success
Aug 22 19:20:46.274: INFO: Pod "pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2" satisfied condition "success or failure"
Aug 22 19:20:46.353: INFO: Trying to get logs from node jerma-worker pod pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2 container secret-volume-test: 
STEP: delete the pod
Aug 22 19:20:46.587: INFO: Waiting for pod pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2 to disappear
Aug 22 19:20:46.622: INFO: Pod pod-secrets-35652b65-f2b8-4000-9001-f877dbd0c5b2 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:20:46.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6260" for this suite.

• [SLOW TEST:6.731 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:20:46.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:20:47.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 22 19:20:49.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 create -f -'
Aug 22 19:21:00.542: INFO: stderr: ""
Aug 22 19:21:00.542: INFO: stdout: "e2e-test-crd-publish-openapi-6832-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 22 19:21:00.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 delete e2e-test-crd-publish-openapi-6832-crds test-foo'
Aug 22 19:21:00.689: INFO: stderr: ""
Aug 22 19:21:00.689: INFO: stdout: "e2e-test-crd-publish-openapi-6832-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 22 19:21:00.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 apply -f -'
Aug 22 19:21:03.046: INFO: stderr: ""
Aug 22 19:21:03.046: INFO: stdout: "e2e-test-crd-publish-openapi-6832-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 22 19:21:03.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 delete e2e-test-crd-publish-openapi-6832-crds test-foo'
Aug 22 19:21:03.452: INFO: stderr: ""
Aug 22 19:21:03.452: INFO: stdout: "e2e-test-crd-publish-openapi-6832-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 22 19:21:03.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 create -f -'
Aug 22 19:21:04.495: INFO: rc: 1
Aug 22 19:21:04.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 apply -f -'
Aug 22 19:21:04.858: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 22 19:21:04.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 create -f -'
Aug 22 19:21:05.080: INFO: rc: 1
Aug 22 19:21:05.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-215 apply -f -'
Aug 22 19:21:05.322: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 22 19:21:05.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6832-crds'
Aug 22 19:21:05.557: INFO: stderr: ""
Aug 22 19:21:05.557: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6832-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 22 19:21:05.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6832-crds.metadata'
Aug 22 19:21:05.784: INFO: stderr: ""
Aug 22 19:21:05.784: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6832-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 22 19:21:05.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6832-crds.spec'
Aug 22 19:21:06.060: INFO: stderr: ""
Aug 22 19:21:06.060: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6832-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 22 19:21:06.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6832-crds.spec.bars'
Aug 22 19:21:06.314: INFO: stderr: ""
Aug 22 19:21:06.314: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6832-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 22 19:21:06.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6832-crds.spec.bars2'
Aug 22 19:21:06.570: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:21:09.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-215" for this suite.

• [SLOW TEST:22.820 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":125,"skipped":2144,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:21:09.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 19:21:09.497: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 19:21:09.516: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 19:21:09.520: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 19:21:09.525: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:21:09.525: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 19:21:09.525: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 19:21:09.525: INFO: 	Container app ready: true, restart count 0
Aug 22 19:21:09.525: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:21:09.525: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 19:21:09.525: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 19:21:09.551: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:21:09.551: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 19:21:09.551: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:21:09.551: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 19:21:09.551: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 19:21:09.551: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c60e26cc-21b6-4bdb-8d4c-1620063afa65 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c60e26cc-21b6-4bdb-8d4c-1620063afa65 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c60e26cc-21b6-4bdb-8d4c-1620063afa65
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:21:29.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9469" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:20.000 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":126,"skipped":2157,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:21:29.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:21:44.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7221" for this suite.

• [SLOW TEST:15.120 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":127,"skipped":2167,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:21:44.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-4e2c13db-3654-4238-86c3-3bcd2a3aecf6
STEP: Creating a pod to test consume configMaps
Aug 22 19:21:45.623: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f" in namespace "projected-9784" to be "success or failure"
Aug 22 19:21:45.867: INFO: Pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 244.202449ms
Aug 22 19:21:48.100: INFO: Pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476991102s
Aug 22 19:21:50.196: INFO: Pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573278103s
Aug 22 19:21:52.201: INFO: Pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57746656s
Aug 22 19:21:54.418: INFO: Pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f": Phase="Running", Reason="", readiness=true. Elapsed: 8.794872452s
Aug 22 19:21:56.422: INFO: Pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798551656s
STEP: Saw pod success
Aug 22 19:21:56.422: INFO: Pod "pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f" satisfied condition "success or failure"
Aug 22 19:21:56.424: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f container projected-configmap-volume-test: <nil>
STEP: delete the pod
Aug 22 19:21:56.650: INFO: Waiting for pod pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f to disappear
Aug 22 19:21:56.653: INFO: Pod pod-projected-configmaps-b1b9ab9d-6b35-487a-a3d8-e8a445ed3f3f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:21:56.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9784" for this suite.

• [SLOW TEST:12.088 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:21:56.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:21:57.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:22:03.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4221" for this suite.

• [SLOW TEST:6.711 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2225,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:22:03.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:22:04.374: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:22:06.385: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:22:08.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733720924, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:22:11.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:22:12.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4962" for this suite.
STEP: Destroying namespace "webhook-4962-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.959 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":130,"skipped":2264,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:22:14.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:22:27.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3885" for this suite.

• [SLOW TEST:13.663 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":131,"skipped":2264,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:22:27.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:22:46.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6095" for this suite.

• [SLOW TEST:18.985 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":132,"skipped":2266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:22:46.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e23f336b-4bca-4d00-a74d-1da9e1bd68c5
STEP: Creating a pod to test consume secrets
Aug 22 19:22:48.687: INFO: Waiting up to 5m0s for pod "pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2" in namespace "secrets-9912" to be "success or failure"
Aug 22 19:22:48.994: INFO: Pod "pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2": Phase="Pending", Reason="", readiness=false. Elapsed: 306.756583ms
Aug 22 19:22:50.998: INFO: Pod "pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310371201s
Aug 22 19:22:53.070: INFO: Pod "pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383183734s
Aug 22 19:22:55.754: INFO: Pod "pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.066725093s
Aug 22 19:22:58.047: INFO: Pod "pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.359745443s
STEP: Saw pod success
Aug 22 19:22:58.047: INFO: Pod "pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2" satisfied condition "success or failure"
Aug 22 19:22:58.050: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2 container secret-volume-test: <nil>
STEP: delete the pod
Aug 22 19:22:59.001: INFO: Waiting for pod pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2 to disappear
Aug 22 19:22:59.038: INFO: Pod pod-secrets-bb8bc334-1e54-4c3d-8ca1-9773e19241d2 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:22:59.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9912" for this suite.

• [SLOW TEST:12.328 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2293,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:22:59.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6704/configmap-test-3a434ed8-a610-444a-9ab6-dcbb88e06185
STEP: Creating a pod to test consume configMaps
Aug 22 19:23:00.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc" in namespace "configmap-6704" to be "success or failure"
Aug 22 19:23:00.403: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 33.91522ms
Aug 22 19:23:02.888: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519065841s
Aug 22 19:23:05.071: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.702008455s
Aug 22 19:23:07.254: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.884705045s
Aug 22 19:23:09.336: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.966683314s
Aug 22 19:23:12.173: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc": Phase="Running", Reason="", readiness=true. Elapsed: 11.80412485s
Aug 22 19:23:14.340: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.970310778s
STEP: Saw pod success
Aug 22 19:23:14.340: INFO: Pod "pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc" satisfied condition "success or failure"
Aug 22 19:23:14.341: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc container env-test: <nil>
STEP: delete the pod
Aug 22 19:23:14.839: INFO: Waiting for pod pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc to disappear
Aug 22 19:23:14.889: INFO: Pod pod-configmaps-35781c09-30e2-4ea1-aa0f-f34ab61ed5bc no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:23:14.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6704" for this suite.

• [SLOW TEST:15.587 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2332,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:23:14.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Aug 22 19:23:16.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-9106 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 22 19:23:16.394: INFO: stderr: ""
Aug 22 19:23:16.394: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Aug 22 19:23:16.394: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 22 19:23:16.394: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9106" to be "running and ready, or succeeded"
Aug 22 19:23:16.878: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 484.22815ms
Aug 22 19:23:18.961: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.567146255s
Aug 22 19:23:21.185: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.791291804s
Aug 22 19:23:23.514: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.120513645s
Aug 22 19:23:25.904: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.510376189s
Aug 22 19:23:28.473: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.078723787s
Aug 22 19:23:30.544: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 14.149766837s
Aug 22 19:23:30.544: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 22 19:23:30.544: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 22 19:23:30.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9106'
Aug 22 19:23:30.863: INFO: stderr: ""
Aug 22 19:23:30.863: INFO: stdout: "I0822 19:23:28.643702       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/zh5p 327\nI0822 19:23:28.843861       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/m6k 437\nI0822 19:23:29.043869       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/6nvq 385\nI0822 19:23:29.243893       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/ldv 547\nI0822 19:23:29.443853       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/kbr 293\nI0822 19:23:29.643899       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/rvd 261\nI0822 19:23:29.843891       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/p6hs 327\nI0822 19:23:30.043878       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/8kz9 249\nI0822 19:23:30.243907       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/75xh 589\nI0822 19:23:30.443845       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/g7zx 297\nI0822 19:23:30.643885       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/j6f 584\nI0822 19:23:30.843870       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/t8hs 422\n"
STEP: limiting log lines
Aug 22 19:23:30.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9106 --tail=1'
Aug 22 19:23:31.005: INFO: stderr: ""
Aug 22 19:23:31.005: INFO: stdout: "I0822 19:23:30.843870       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/t8hs 422\n"
Aug 22 19:23:31.005: INFO: got output "I0822 19:23:30.843870       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/t8hs 422\n"
STEP: limiting log bytes
Aug 22 19:23:31.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9106 --limit-bytes=1'
Aug 22 19:23:31.277: INFO: stderr: ""
Aug 22 19:23:31.277: INFO: stdout: "I"
Aug 22 19:23:31.277: INFO: got output "I"
STEP: exposing timestamps
Aug 22 19:23:31.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9106 --tail=1 --timestamps'
Aug 22 19:23:31.399: INFO: stderr: ""
Aug 22 19:23:31.399: INFO: stdout: "2020-08-22T19:23:31.268687559Z I0822 19:23:31.243900       1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/dgh5 567\n"
Aug 22 19:23:31.399: INFO: got output "2020-08-22T19:23:31.268687559Z I0822 19:23:31.243900       1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/dgh5 567\n"
STEP: restricting to a time range
Aug 22 19:23:33.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9106 --since=1s'
Aug 22 19:23:35.776: INFO: stderr: ""
Aug 22 19:23:35.776: INFO: stdout: "I0822 19:23:34.843920       1 logs_generator.go:76] 31 POST /api/v1/namespaces/ns/pods/wdns 593\nI0822 19:23:35.043880       1 logs_generator.go:76] 32 GET /api/v1/namespaces/kube-system/pods/h8v 440\nI0822 19:23:35.243871       1 logs_generator.go:76] 33 PUT /api/v1/namespaces/kube-system/pods/psj 227\nI0822 19:23:35.443925       1 logs_generator.go:76] 34 POST /api/v1/namespaces/kube-system/pods/7sh 324\nI0822 19:23:35.643853       1 logs_generator.go:76] 35 PUT /api/v1/namespaces/ns/pods/dhzl 520\n"
Aug 22 19:23:35.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9106 --since=24h'
Aug 22 19:23:36.048: INFO: stderr: ""
Aug 22 19:23:36.048: INFO: stdout: "I0822 19:23:28.643702       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/zh5p 327\nI0822 19:23:28.843861       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/m6k 437\nI0822 19:23:29.043869       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/6nvq 385\nI0822 19:23:29.243893       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/ldv 547\nI0822 19:23:29.443853       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/kbr 293\nI0822 19:23:29.643899       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/rvd 261\nI0822 19:23:29.843891       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/p6hs 327\nI0822 19:23:30.043878       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/8kz9 249\nI0822 19:23:30.243907       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/75xh 589\nI0822 19:23:30.443845       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/g7zx 297\nI0822 19:23:30.643885       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/j6f 584\nI0822 19:23:30.843870       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/t8hs 422\nI0822 19:23:31.043850       1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/xnb 412\nI0822 19:23:31.243900       1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/dgh5 567\nI0822 19:23:31.443930       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/zgz 331\nI0822 19:23:31.647456       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/kwh 488\nI0822 19:23:31.843884       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/nxln 424\nI0822 19:23:32.043877       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/8ws 361\nI0822 19:23:32.243878       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/lrtf 529\nI0822 19:23:32.443918       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/nl4 424\nI0822 19:23:32.643953       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/6r2q 209\nI0822 19:23:32.843823       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/b48k 418\nI0822 19:23:33.043890       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/xrqc 298\nI0822 19:23:33.244661       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/q5jn 520\nI0822 19:23:33.443866       1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/6vl 481\nI0822 19:23:33.643823       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/mqh 378\nI0822 19:23:33.843888       1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/hn5 503\nI0822 19:23:34.043868       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/kube-system/pods/8wq 528\nI0822 19:23:34.243885       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/4z2s 425\nI0822 19:23:34.443859       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/lsl7 202\nI0822 19:23:34.643879       1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/5sjg 449\nI0822 19:23:34.843920       1 logs_generator.go:76] 31 POST /api/v1/namespaces/ns/pods/wdns 593\nI0822 19:23:35.043880       1 logs_generator.go:76] 32 GET /api/v1/namespaces/kube-system/pods/h8v 440\nI0822 19:23:35.243871       1 logs_generator.go:76] 33 PUT /api/v1/namespaces/kube-system/pods/psj 227\nI0822 19:23:35.443925       1 logs_generator.go:76] 34 POST /api/v1/namespaces/kube-system/pods/7sh 324\nI0822 19:23:35.643853       1 logs_generator.go:76] 35 PUT /api/v1/namespaces/ns/pods/dhzl 520\nI0822 19:23:35.843847       1 logs_generator.go:76] 36 GET /api/v1/namespaces/default/pods/5gt4 462\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 22 19:23:36.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9106'
Aug 22 19:23:52.264: INFO: stderr: ""
Aug 22 19:23:52.264: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:23:52.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9106" for this suite.

• [SLOW TEST:37.676 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":135,"skipped":2348,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:23:52.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:23:54.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2" in namespace "projected-8928" to be "success or failure"
Aug 22 19:23:55.071: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2": Phase="Pending", Reason="", readiness=false. Elapsed: 959.04792ms
Aug 22 19:23:57.131: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.01937452s
Aug 22 19:23:59.511: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.399461959s
Aug 22 19:24:01.689: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.576907873s
Aug 22 19:24:03.718: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.606143969s
Aug 22 19:24:06.231: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2": Phase="Running", Reason="", readiness=true. Elapsed: 12.118647655s
Aug 22 19:24:08.605: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.492700146s
STEP: Saw pod success
Aug 22 19:24:08.605: INFO: Pod "downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2" satisfied condition "success or failure"
Aug 22 19:24:08.608: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2 container client-container: <nil>
STEP: delete the pod
Aug 22 19:24:10.341: INFO: Waiting for pod downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2 to disappear
Aug 22 19:24:10.343: INFO: Pod downwardapi-volume-cec3bb7f-59dc-47a7-9bf7-75936e0364f2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:24:10.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8928" for this suite.

• [SLOW TEST:17.806 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2368,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:24:10.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 19:24:11.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7252'
Aug 22 19:24:11.832: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 22 19:24:11.832: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Aug 22 19:24:12.043: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug 22 19:24:12.712: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 22 19:24:12.793: INFO: scanned /root for discovery docs: 
Aug 22 19:24:12.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7252'
Aug 22 19:24:39.478: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 22 19:24:39.478: INFO: stdout: "Created e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf\nScaling up e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 22 19:24:39.478: INFO: stdout: "Created e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf\nScaling up e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 22 19:24:39.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7252'
Aug 22 19:24:39.965: INFO: stderr: ""
Aug 22 19:24:39.965: INFO: stdout: "e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf-j4jvl e2e-test-httpd-rc-vbx42 "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Aug 22 19:24:44.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7252'
Aug 22 19:24:45.153: INFO: stderr: ""
Aug 22 19:24:45.153: INFO: stdout: "e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf-j4jvl "
Aug 22 19:24:45.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf-j4jvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7252'
Aug 22 19:24:45.248: INFO: stderr: ""
Aug 22 19:24:45.248: INFO: stdout: "true"
Aug 22 19:24:45.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf-j4jvl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7252'
Aug 22 19:24:45.333: INFO: stderr: ""
Aug 22 19:24:45.333: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 22 19:24:45.333: INFO: e2e-test-httpd-rc-e8e9078ecbb8c83e4fa11a711dcb47cf-j4jvl is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 22 19:24:45.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7252'
Aug 22 19:24:45.598: INFO: stderr: ""
Aug 22 19:24:45.598: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:24:45.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7252" for this suite.

• [SLOW TEST:35.446 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":137,"skipped":2415,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:24:45.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 22 19:24:54.182: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:24:55.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7272" for this suite.

• [SLOW TEST:9.584 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2447,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:24:55.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3990
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3990
STEP: creating replication controller externalsvc in namespace services-3990
I0822 19:24:57.554550       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3990, replica count: 2
I0822 19:25:00.605072       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:25:03.605264       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:25:06.605481       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 22 19:25:06.743: INFO: Creating new exec pod
Aug 22 19:25:12.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3990 execpodm5g79 -- /bin/sh -x -c nslookup nodeport-service'
Aug 22 19:25:13.154: INFO: stderr: "I0822 19:25:13.066038    2443 log.go:172] (0xc000118840) (0xc000443b80) Create stream\nI0822 19:25:13.066083    2443 log.go:172] (0xc000118840) (0xc000443b80) Stream added, broadcasting: 1\nI0822 19:25:13.067805    2443 log.go:172] (0xc000118840) Reply frame received for 1\nI0822 19:25:13.067847    2443 log.go:172] (0xc000118840) (0xc000443d60) Create stream\nI0822 19:25:13.067860    2443 log.go:172] (0xc000118840) (0xc000443d60) Stream added, broadcasting: 3\nI0822 19:25:13.068815    2443 log.go:172] (0xc000118840) Reply frame received for 3\nI0822 19:25:13.068859    2443 log.go:172] (0xc000118840) (0xc000a94000) Create stream\nI0822 19:25:13.068874    2443 log.go:172] (0xc000118840) (0xc000a94000) Stream added, broadcasting: 5\nI0822 19:25:13.069622    2443 log.go:172] (0xc000118840) Reply frame received for 5\nI0822 19:25:13.137910    2443 log.go:172] (0xc000118840) Data frame received for 5\nI0822 19:25:13.137941    2443 log.go:172] (0xc000a94000) (5) Data frame handling\nI0822 19:25:13.137957    2443 log.go:172] (0xc000a94000) (5) Data frame sent\n+ nslookup nodeport-service\nI0822 19:25:13.142038    2443 log.go:172] (0xc000118840) Data frame received for 3\nI0822 19:25:13.142058    2443 log.go:172] (0xc000443d60) (3) Data frame handling\nI0822 19:25:13.142072    2443 log.go:172] (0xc000443d60) (3) Data frame sent\nI0822 19:25:13.142838    2443 log.go:172] (0xc000118840) Data frame received for 3\nI0822 19:25:13.142853    2443 log.go:172] (0xc000443d60) (3) Data frame handling\nI0822 19:25:13.142876    2443 log.go:172] (0xc000443d60) (3) Data frame sent\nI0822 19:25:13.143220    2443 log.go:172] (0xc000118840) Data frame received for 3\nI0822 19:25:13.143239    2443 log.go:172] (0xc000118840) Data frame received for 5\nI0822 19:25:13.143261    2443 log.go:172] (0xc000a94000) (5) Data frame handling\nI0822 19:25:13.143287    2443 log.go:172] (0xc000443d60) (3) Data frame handling\nI0822 19:25:13.144456    2443 log.go:172] (0xc000118840) Data frame received for 1\nI0822 19:25:13.144476    2443 log.go:172] (0xc000443b80) (1) Data frame handling\nI0822 19:25:13.144487    2443 log.go:172] (0xc000443b80) (1) Data frame sent\nI0822 19:25:13.144503    2443 log.go:172] (0xc000118840) (0xc000443b80) Stream removed, broadcasting: 1\nI0822 19:25:13.144522    2443 log.go:172] (0xc000118840) Go away received\nI0822 19:25:13.144960    2443 log.go:172] (0xc000118840) (0xc000443b80) Stream removed, broadcasting: 1\nI0822 19:25:13.144982    2443 log.go:172] (0xc000118840) (0xc000443d60) Stream removed, broadcasting: 3\nI0822 19:25:13.144994    2443 log.go:172] (0xc000118840) (0xc000a94000) Stream removed, broadcasting: 5\n"
Aug 22 19:25:13.154: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3990.svc.cluster.local\tcanonical name = externalsvc.services-3990.svc.cluster.local.\nName:\texternalsvc.services-3990.svc.cluster.local\nAddress: 10.97.49.95\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3990, will wait for the garbage collector to delete the pods
Aug 22 19:25:13.213: INFO: Deleting ReplicationController externalsvc took: 6.29068ms
Aug 22 19:25:13.614: INFO: Terminating ReplicationController externalsvc pods took: 400.761318ms
Aug 22 19:25:24.737: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:25:25.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3990" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:31.368 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":139,"skipped":2456,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:25:26.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 22 19:25:28.128: INFO: Waiting up to 5m0s for pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b" in namespace "emptydir-5040" to be "success or failure"
Aug 22 19:25:28.473: INFO: Pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b": Phase="Pending", Reason="", readiness=false. Elapsed: 344.636221ms
Aug 22 19:25:30.476: INFO: Pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347517803s
Aug 22 19:25:32.689: INFO: Pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.560049196s
Aug 22 19:25:34.922: INFO: Pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.793935379s
Aug 22 19:25:37.030: INFO: Pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.901675582s
Aug 22 19:25:39.035: INFO: Pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.906077515s
STEP: Saw pod success
Aug 22 19:25:39.035: INFO: Pod "pod-7aa2b731-d9c8-40c7-b985-482e04389f8b" satisfied condition "success or failure"
Aug 22 19:25:39.037: INFO: Trying to get logs from node jerma-worker2 pod pod-7aa2b731-d9c8-40c7-b985-482e04389f8b container test-container: 
STEP: delete the pod
Aug 22 19:25:39.110: INFO: Waiting for pod pod-7aa2b731-d9c8-40c7-b985-482e04389f8b to disappear
Aug 22 19:25:39.144: INFO: Pod pod-7aa2b731-d9c8-40c7-b985-482e04389f8b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:25:39.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5040" for this suite.

• [SLOW TEST:12.558 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2464,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:25:39.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 19:25:39.655: INFO: Waiting up to 5m0s for pod "downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47" in namespace "downward-api-3980" to be "success or failure"
Aug 22 19:25:40.114: INFO: Pod "downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47": Phase="Pending", Reason="", readiness=false. Elapsed: 458.720243ms
Aug 22 19:25:42.119: INFO: Pod "downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463271717s
Aug 22 19:25:44.181: INFO: Pod "downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47": Phase="Running", Reason="", readiness=true. Elapsed: 4.525919298s
Aug 22 19:25:46.186: INFO: Pod "downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.531173237s
STEP: Saw pod success
Aug 22 19:25:46.187: INFO: Pod "downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47" satisfied condition "success or failure"
Aug 22 19:25:46.189: INFO: Trying to get logs from node jerma-worker pod downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47 container dapi-container: 
STEP: delete the pod
Aug 22 19:25:46.435: INFO: Waiting for pod downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47 to disappear
Aug 22 19:25:46.462: INFO: Pod downward-api-fc027837-4daa-4bf6-b195-ada0d31bba47 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:25:46.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3980" for this suite.

• [SLOW TEST:7.131 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2480,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:25:46.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 22 19:25:50.775: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 22 19:26:05.884: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:26:05.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4294" for this suite.

• [SLOW TEST:19.425 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":142,"skipped":2502,"failed":0}
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:26:05.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:26:06.070: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"05a609b6-b973-4db3-a102-651a3bf3ceeb", Controller:(*bool)(0xc00479efe2), BlockOwnerDeletion:(*bool)(0xc00479efe3)}}
Aug 22 19:26:06.079: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0a581c03-1c83-445a-aa20-c7854ec9934d", Controller:(*bool)(0xc002b6eb52), BlockOwnerDeletion:(*bool)(0xc002b6eb53)}}
Aug 22 19:26:06.108: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ee52592c-b7d0-4da4-b90a-0075a7637577", Controller:(*bool)(0xc00479f1da), BlockOwnerDeletion:(*bool)(0xc00479f1db)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:26:11.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3673" for this suite.

• [SLOW TEST:5.323 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":143,"skipped":2502,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:26:11.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:26:11.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4806" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":144,"skipped":2523,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:26:11.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 19:26:11.590: INFO: Waiting up to 5m0s for pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654" in namespace "downward-api-5284" to be "success or failure"
Aug 22 19:26:11.618: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654": Phase="Pending", Reason="", readiness=false. Elapsed: 27.798416ms
Aug 22 19:26:13.674: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083350341s
Aug 22 19:26:15.677: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087216111s
Aug 22 19:26:17.710: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119800422s
Aug 22 19:26:19.833: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242639521s
Aug 22 19:26:22.107: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654": Phase="Running", Reason="", readiness=true. Elapsed: 10.516953086s
Aug 22 19:26:24.111: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.52067305s
STEP: Saw pod success
Aug 22 19:26:24.111: INFO: Pod "downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654" satisfied condition "success or failure"
Aug 22 19:26:24.113: INFO: Trying to get logs from node jerma-worker2 pod downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654 container dapi-container: 
STEP: delete the pod
Aug 22 19:26:24.446: INFO: Waiting for pod downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654 to disappear
Aug 22 19:26:24.493: INFO: Pod downward-api-f735fb15-36bd-44ea-8233-0b5c897a3654 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:26:24.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5284" for this suite.

• [SLOW TEST:12.957 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:26:24.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-917 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-917;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-917 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-917;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-917.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-917.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-917.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-917.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-917.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.91.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.91.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.91.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.91.195_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-917 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-917;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-917 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-917;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-917.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-917.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-917.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-917.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-917.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.91.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.91.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.91.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.91.195_tcp@PTR;sleep 1; done
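
Editor's note: both probe loops above rely on dig's +search flag, which walks the pod's resolv.conf search path so that partial names like dns-test-service or dns-test-service.dns-917.svc resolve without a full cluster.local suffix. Stripped of the loop and result files, a single probe from any pod with dig installed reduces to:

    # A record via the search path (UDP, then forced TCP)
    dig +notcp +noall +answer +search dns-test-service A
    dig +tcp   +noall +answer +search dns-test-service A
    # SRV record for a named port on the headless service
    dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-917.svc SRV
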

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 19:26:40.615: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.618: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.622: INFO: Unable to read wheezy_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.625: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.628: INFO: Unable to read wheezy_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.631: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.635: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.821: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.989: INFO: Unable to read jessie_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.991: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.993: INFO: Unable to read jessie_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.995: INFO: Unable to read jessie_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:40.998: INFO: Unable to read jessie_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:41.001: INFO: Unable to read jessie_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:41.003: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:41.006: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:41.023: INFO: Lookups using dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-917 wheezy_tcp@dns-test-service.dns-917 wheezy_udp@dns-test-service.dns-917.svc wheezy_tcp@dns-test-service.dns-917.svc wheezy_udp@_http._tcp.dns-test-service.dns-917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-917 jessie_tcp@dns-test-service.dns-917 jessie_udp@dns-test-service.dns-917.svc jessie_tcp@dns-test-service.dns-917.svc jessie_udp@_http._tcp.dns-test-service.dns-917.svc jessie_tcp@_http._tcp.dns-test-service.dns-917.svc]

Aug 22 19:26:46.028: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.031: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.035: INFO: Unable to read wheezy_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.038: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.041: INFO: Unable to read wheezy_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.044: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.047: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.049: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.067: INFO: Unable to read jessie_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.070: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.073: INFO: Unable to read jessie_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.076: INFO: Unable to read jessie_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.079: INFO: Unable to read jessie_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.082: INFO: Unable to read jessie_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.085: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.088: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:46.104: INFO: Lookups using dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-917 wheezy_tcp@dns-test-service.dns-917 wheezy_udp@dns-test-service.dns-917.svc wheezy_tcp@dns-test-service.dns-917.svc wheezy_udp@_http._tcp.dns-test-service.dns-917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-917 jessie_tcp@dns-test-service.dns-917 jessie_udp@dns-test-service.dns-917.svc jessie_tcp@dns-test-service.dns-917.svc jessie_udp@_http._tcp.dns-test-service.dns-917.svc jessie_tcp@_http._tcp.dns-test-service.dns-917.svc]

Aug 22 19:26:51.027: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.030: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.033: INFO: Unable to read wheezy_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.039: INFO: Unable to read wheezy_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.042: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.045: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.049: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.069: INFO: Unable to read jessie_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.072: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.074: INFO: Unable to read jessie_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.076: INFO: Unable to read jessie_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.079: INFO: Unable to read jessie_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.081: INFO: Unable to read jessie_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.092: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.098: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:51.120: INFO: Lookups using dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-917 wheezy_tcp@dns-test-service.dns-917 wheezy_udp@dns-test-service.dns-917.svc wheezy_tcp@dns-test-service.dns-917.svc wheezy_udp@_http._tcp.dns-test-service.dns-917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-917 jessie_tcp@dns-test-service.dns-917 jessie_udp@dns-test-service.dns-917.svc jessie_tcp@dns-test-service.dns-917.svc jessie_udp@_http._tcp.dns-test-service.dns-917.svc jessie_tcp@_http._tcp.dns-test-service.dns-917.svc]

Aug 22 19:26:56.027: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.030: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.033: INFO: Unable to read wheezy_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.039: INFO: Unable to read wheezy_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.042: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.044: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.048: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.065: INFO: Unable to read jessie_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.068: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.070: INFO: Unable to read jessie_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.072: INFO: Unable to read jessie_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.075: INFO: Unable to read jessie_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.078: INFO: Unable to read jessie_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.080: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.083: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:26:56.100: INFO: Lookups using dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-917 wheezy_tcp@dns-test-service.dns-917 wheezy_udp@dns-test-service.dns-917.svc wheezy_tcp@dns-test-service.dns-917.svc wheezy_udp@_http._tcp.dns-test-service.dns-917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-917 jessie_tcp@dns-test-service.dns-917 jessie_udp@dns-test-service.dns-917.svc jessie_tcp@dns-test-service.dns-917.svc jessie_udp@_http._tcp.dns-test-service.dns-917.svc jessie_tcp@_http._tcp.dns-test-service.dns-917.svc]

Aug 22 19:27:01.325: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.357: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.392: INFO: Unable to read wheezy_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.400: INFO: Unable to read wheezy_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.402: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.405: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.408: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.479: INFO: Unable to read jessie_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.483: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.486: INFO: Unable to read jessie_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.488: INFO: Unable to read jessie_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.490: INFO: Unable to read jessie_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.493: INFO: Unable to read jessie_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.496: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.499: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:01.513: INFO: Lookups using dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-917 wheezy_tcp@dns-test-service.dns-917 wheezy_udp@dns-test-service.dns-917.svc wheezy_tcp@dns-test-service.dns-917.svc wheezy_udp@_http._tcp.dns-test-service.dns-917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-917 jessie_tcp@dns-test-service.dns-917 jessie_udp@dns-test-service.dns-917.svc jessie_tcp@dns-test-service.dns-917.svc jessie_udp@_http._tcp.dns-test-service.dns-917.svc jessie_tcp@_http._tcp.dns-test-service.dns-917.svc]

Aug 22 19:27:06.058: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.181: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.226: INFO: Unable to read wheezy_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.230: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.233: INFO: Unable to read wheezy_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.236: INFO: Unable to read wheezy_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.239: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.242: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.263: INFO: Unable to read jessie_udp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.265: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.267: INFO: Unable to read jessie_udp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.270: INFO: Unable to read jessie_tcp@dns-test-service.dns-917 from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.273: INFO: Unable to read jessie_udp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.278: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.281: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-917.svc from pod dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26: the server could not find the requested resource (get pods dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26)
Aug 22 19:27:06.297: INFO: Lookups using dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-917 wheezy_tcp@dns-test-service.dns-917 wheezy_udp@dns-test-service.dns-917.svc wheezy_tcp@dns-test-service.dns-917.svc wheezy_udp@_http._tcp.dns-test-service.dns-917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-917 jessie_tcp@dns-test-service.dns-917 jessie_udp@dns-test-service.dns-917.svc jessie_tcp@dns-test-service.dns-917.svc jessie_udp@_http._tcp.dns-test-service.dns-917.svc jessie_tcp@_http._tcp.dns-test-service.dns-917.svc]

Aug 22 19:27:11.113: INFO: DNS probes using dns-917/dns-test-721fc008-71e6-43dc-9cc1-8cf159dc0b26 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:27:12.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-917" for this suite.

• [SLOW TEST:48.507 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":146,"skipped":2605,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:27:13.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-ce2727ef-0822-4a45-8efb-62007660c103 in namespace container-probe-1877
Aug 22 19:27:17.090: INFO: Started pod test-webserver-ce2727ef-0822-4a45-8efb-62007660c103 in namespace container-probe-1877
STEP: checking the pod's current state and verifying that restartCount is present
Aug 22 19:27:17.093: INFO: Initial restart count of pod test-webserver-ce2727ef-0822-4a45-8efb-62007660c103 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:31:17.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1877" for this suite.

• [SLOW TEST:246.309 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2629,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:31:19.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 22 19:31:19.980: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 22 19:31:21.374: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 22 19:31:24.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:31:26.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:31:28.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:31:30.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:31:32.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733721481, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:31:35.407: INFO: Waited 1.015996632s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:31:42.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1999" for this suite.

• [SLOW TEST:23.033 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":148,"skipped":2634,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:31:42.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:31:52.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5409" for this suite.

• [SLOW TEST:9.868 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2635,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:31:52.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 22 19:31:52.567: INFO: Waiting up to 5m0s for pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f" in namespace "emptydir-1820" to be "success or failure"
Aug 22 19:31:52.577: INFO: Pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.792504ms
Aug 22 19:31:54.595: INFO: Pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027584585s
Aug 22 19:31:56.865: INFO: Pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29775034s
Aug 22 19:31:59.202: INFO: Pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634612165s
Aug 22 19:32:01.350: INFO: Pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f": Phase="Running", Reason="", readiness=true. Elapsed: 8.782943925s
Aug 22 19:32:03.354: INFO: Pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.786598657s
STEP: Saw pod success
Aug 22 19:32:03.354: INFO: Pod "pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f" satisfied condition "success or failure"
Aug 22 19:32:03.357: INFO: Trying to get logs from node jerma-worker2 pod pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f container test-container: 
STEP: delete the pod
Aug 22 19:32:03.405: INFO: Waiting for pod pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f to disappear
Aug 22 19:32:03.458: INFO: Pod pod-97b9816d-76d1-4696-bbc0-a70c5ce9064f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:32:03.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1820" for this suite.

• [SLOW TEST:11.526 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2645,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:32:03.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:32:03.979: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/: 
alternatives.log
containers/

[... the same alternatives.log / containers/ listing repeats for proxy attempts (1) through (19); the per-attempt INFO lines, the tail of this test (AfterEach, SLOW TEST, and PASSED record for spec 151), and the header of the next test were truncated in the captured log ...]

------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 in namespace container-probe-2937
Aug 22 19:32:10.787: INFO: Started pod liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 in namespace container-probe-2937
STEP: checking the pod's current state and verifying that restartCount is present
Aug 22 19:32:10.789: INFO: Initial restart count of pod liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 is 0
Aug 22 19:32:33.363: INFO: Restart count of pod container-probe-2937/liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 is now 1 (22.574501176s elapsed)
Aug 22 19:32:53.590: INFO: Restart count of pod container-probe-2937/liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 is now 2 (42.801423932s elapsed)
Aug 22 19:33:14.537: INFO: Restart count of pod container-probe-2937/liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 is now 3 (1m3.748100952s elapsed)
Aug 22 19:33:32.804: INFO: Restart count of pod container-probe-2937/liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 is now 4 (1m22.015314247s elapsed)
Aug 22 19:34:35.130: INFO: Restart count of pod container-probe-2937/liveness-6ca0a2ef-1716-40f1-853f-63a26021cf69 is now 5 (2m24.34081621s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:34:35.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2937" for this suite.

• [SLOW TEST:151.319 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2712,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:34:35.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-rwp7f in namespace proxy-1185
I0822 19:34:36.753749       6 runners.go:189] Created replication controller with name: proxy-service-rwp7f, namespace: proxy-1185, replica count: 1
I0822 19:34:37.804287       6 runners.go:189] proxy-service-rwp7f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:34:38.804536       6 runners.go:189] proxy-service-rwp7f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:34:39.804948       6 runners.go:189] proxy-service-rwp7f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:34:40.805214       6 runners.go:189] proxy-service-rwp7f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:34:41.805409       6 runners.go:189] proxy-service-rwp7f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0822 19:34:42.805643       6 runners.go:189] proxy-service-rwp7f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0822 19:34:43.805839       6 runners.go:189] proxy-service-rwp7f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 22 19:34:43.809: INFO: setup took 7.946462386s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 22 19:34:43.816: INFO: (0) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 7.384369ms)
Aug 22 19:34:43.816: INFO: (0) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 6.882031ms)
Aug 22 19:34:43.816: INFO: (0) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 7.577826ms)
Aug 22 19:34:43.818: INFO: (0) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 8.356896ms)
Aug 22 19:34:43.818: INFO: (0) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 9.134336ms)
Aug 22 19:34:43.818: INFO: (0) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 9.296079ms)
Aug 22 19:34:43.818: INFO: (0) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 9.212589ms)
Aug 22 19:34:43.818: INFO: (0) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 9.421141ms)
Aug 22 19:34:43.819: INFO: (0) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 9.044424ms)
Aug 22 19:34:43.819: INFO: (0) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 9.797188ms)
Aug 22 19:34:43.819: INFO: (0) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 9.962026ms)
Aug 22 19:34:43.826: INFO: (0) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test<... (200; 7.540959ms)
Aug 22 19:34:43.834: INFO: (1) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 7.382479ms)
Aug 22 19:34:43.834: INFO: (1) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 7.828063ms)
Aug 22 19:34:43.834: INFO: (1) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 7.533631ms)
Aug 22 19:34:43.834: INFO: (1) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test (200; 7.897017ms)
Aug 22 19:34:43.835: INFO: (1) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 7.876088ms)
Aug 22 19:34:43.837: INFO: (1) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 9.912139ms)
Aug 22 19:34:43.837: INFO: (1) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 10.119315ms)
Aug 22 19:34:43.837: INFO: (1) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 10.161422ms)
Aug 22 19:34:43.837: INFO: (1) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 10.01862ms)
Aug 22 19:34:43.837: INFO: (1) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 10.075201ms)
Aug 22 19:34:43.837: INFO: (1) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 10.031992ms)
Aug 22 19:34:43.842: INFO: (2) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.873556ms)
Aug 22 19:34:43.842: INFO: (2) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 5.017459ms)
Aug 22 19:34:43.843: INFO: (2) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 5.726874ms)
Aug 22 19:34:43.843: INFO: (2) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 6.191798ms)
Aug 22 19:34:43.843: INFO: (2) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 6.854207ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 6.939403ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 6.873497ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 6.843094ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 6.926746ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 6.933468ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 6.977244ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 7.100046ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 7.084566ms)
Aug 22 19:34:43.844: INFO: (2) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 7.03278ms)
Aug 22 19:34:43.848: INFO: (3) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test (200; 4.79395ms)
Aug 22 19:34:43.849: INFO: (3) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 4.836832ms)
Aug 22 19:34:43.849: INFO: (3) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 4.844785ms)
Aug 22 19:34:43.849: INFO: (3) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 4.849044ms)
Aug 22 19:34:43.849: INFO: (3) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.920056ms)
Aug 22 19:34:43.850: INFO: (3) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 5.496969ms)
Aug 22 19:34:43.850: INFO: (3) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 5.480651ms)
Aug 22 19:34:43.850: INFO: (3) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 5.531701ms)
Aug 22 19:34:43.850: INFO: (3) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 5.570459ms)
Aug 22 19:34:43.850: INFO: (3) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 5.583096ms)
Aug 22 19:34:43.850: INFO: (3) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 5.51075ms)
Aug 22 19:34:43.850: INFO: (3) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 5.71961ms)
Aug 22 19:34:43.851: INFO: (3) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 6.583054ms)
Aug 22 19:34:43.851: INFO: (3) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 6.724953ms)
Aug 22 19:34:43.854: INFO: (4) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.320437ms)
Aug 22 19:34:43.854: INFO: (4) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.341324ms)
Aug 22 19:34:43.854: INFO: (4) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.362596ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 4.505038ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.571357ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 4.598398ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 4.594112ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 4.692823ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.838465ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 4.965294ms)
Aug 22 19:34:43.856: INFO: (4) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test<... (200; 3.853892ms)
Aug 22 19:34:43.862: INFO: (5) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.258172ms)
Aug 22 19:34:43.862: INFO: (5) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.347266ms)
Aug 22 19:34:43.862: INFO: (5) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 4.374789ms)
Aug 22 19:34:43.863: INFO: (5) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 4.598196ms)
Aug 22 19:34:43.863: INFO: (5) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.604286ms)
Aug 22 19:34:43.863: INFO: (5) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 4.701659ms)
Aug 22 19:34:43.863: INFO: (5) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 4.650004ms)
Aug 22 19:34:43.863: INFO: (5) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 4.935435ms)
Aug 22 19:34:43.863: INFO: (5) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 5.448687ms)
Aug 22 19:34:43.870: INFO: (6) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 5.786945ms)
Aug 22 19:34:43.870: INFO: (6) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 5.739093ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 6.088238ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test<... (200; 6.515102ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 6.461124ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 6.390714ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 6.44951ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 6.563956ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 6.540588ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 6.470444ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 6.531848ms)
Aug 22 19:34:43.871: INFO: (6) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 6.698862ms)
Aug 22 19:34:43.879: INFO: (7) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 7.27275ms)
Aug 22 19:34:43.879: INFO: (7) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 7.327832ms)
Aug 22 19:34:43.879: INFO: (7) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 7.28666ms)
Aug 22 19:34:43.879: INFO: (7) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 7.34737ms)
Aug 22 19:34:43.879: INFO: (7) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 7.33706ms)
Aug 22 19:34:43.879: INFO: (7) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 7.997847ms)
Aug 22 19:34:43.879: INFO: (7) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 8.02323ms)
Aug 22 19:34:43.880: INFO: (7) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 7.967777ms)
Aug 22 19:34:43.880: INFO: (7) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 8.140349ms)
Aug 22 19:34:43.880: INFO: (7) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 8.174297ms)
Aug 22 19:34:43.880: INFO: (7) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 8.266839ms)
Aug 22 19:34:43.880: INFO: (7) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 8.374493ms)
Aug 22 19:34:43.880: INFO: (7) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 8.373088ms)
Aug 22 19:34:43.880: INFO: (7) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test<... (200; 9.139461ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.753103ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 4.812834ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 4.763158ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 4.79162ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.810911ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 5.122464ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 5.229281ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 5.216596ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 5.280794ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 5.307767ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 5.245824ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 5.276742ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 5.296662ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 5.25972ms)
Aug 22 19:34:43.886: INFO: (8) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 5.454995ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 3.225023ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.42257ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.409949ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 3.361964ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 3.492194ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.681368ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 3.723111ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.753692ms)
Aug 22 19:34:43.890: INFO: (9) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 3.672817ms)
Aug 22 19:34:43.891: INFO: (9) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 4.426945ms)
Aug 22 19:34:43.891: INFO: (9) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 5.057238ms)
Aug 22 19:34:43.891: INFO: (9) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 5.145833ms)
Aug 22 19:34:43.892: INFO: (9) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 5.229404ms)
Aug 22 19:34:43.892: INFO: (9) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 5.284962ms)
Aug 22 19:34:43.892: INFO: (9) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 5.290099ms)
Aug 22 19:34:43.895: INFO: (10) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.404546ms)
Aug 22 19:34:43.895: INFO: (10) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 3.438628ms)
Aug 22 19:34:43.895: INFO: (10) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 3.369731ms)
Aug 22 19:34:43.895: INFO: (10) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 3.400709ms)
Aug 22 19:34:43.895: INFO: (10) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.393426ms)
Aug 22 19:34:43.895: INFO: (10) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.484576ms)
Aug 22 19:34:43.895: INFO: (10) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 3.667429ms)
Aug 22 19:34:43.896: INFO: (10) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 3.7943ms)
Aug 22 19:34:43.896: INFO: (10) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 3.798478ms)
Aug 22 19:34:43.896: INFO: (10) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test<... (200; 2.710076ms)
Aug 22 19:34:43.900: INFO: (11) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 2.977697ms)
Aug 22 19:34:43.900: INFO: (11) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 3.180014ms)
Aug 22 19:34:43.900: INFO: (11) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 3.203101ms)
Aug 22 19:34:43.900: INFO: (11) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.212177ms)
Aug 22 19:34:43.900: INFO: (11) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.247973ms)
Aug 22 19:34:43.900: INFO: (11) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.277119ms)
Aug 22 19:34:43.900: INFO: (11) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.303546ms)
Aug 22 19:34:43.901: INFO: (11) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 4.020385ms)
Aug 22 19:34:43.901: INFO: (11) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 4.026053ms)
Aug 22 19:34:43.901: INFO: (11) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 4.176952ms)
Aug 22 19:34:43.901: INFO: (11) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 4.140659ms)
Aug 22 19:34:43.901: INFO: (11) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 4.106677ms)
Aug 22 19:34:43.901: INFO: (11) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 3.092538ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 3.64741ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 3.722543ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 3.915669ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.021206ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 4.043189ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 4.118497ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.072356ms)
Aug 22 19:34:43.905: INFO: (12) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 3.2421ms)
Aug 22 19:34:43.910: INFO: (13) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 3.52626ms)
Aug 22 19:34:43.910: INFO: (13) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test (200; 4.328513ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 4.339269ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.357491ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 4.414413ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 4.38034ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 4.547541ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.613226ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 4.680715ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 4.609188ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 4.784556ms)
Aug 22 19:34:43.911: INFO: (13) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 4.842111ms)
Aug 22 19:34:43.914: INFO: (14) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 2.505542ms)
Aug 22 19:34:43.915: INFO: (14) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.539517ms)
Aug 22 19:34:43.915: INFO: (14) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.793891ms)
Aug 22 19:34:43.916: INFO: (14) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 3.910645ms)
Aug 22 19:34:43.916: INFO: (14) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 3.807018ms)
Aug 22 19:34:43.916: INFO: (14) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.090562ms)
Aug 22 19:34:43.916: INFO: (14) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 4.315059ms)
Aug 22 19:34:43.917: INFO: (14) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 4.833783ms)
Aug 22 19:34:43.917: INFO: (14) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 4.834654ms)
Aug 22 19:34:43.917: INFO: (14) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 4.868295ms)
Aug 22 19:34:43.917: INFO: (14) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 4.955981ms)
Aug 22 19:34:43.917: INFO: (14) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.866018ms)
Aug 22 19:34:43.917: INFO: (14) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test (200; 2.711757ms)
Aug 22 19:34:43.921: INFO: (15) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 2.798228ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 3.819564ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 4.041004ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 4.161279ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 4.265193ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 4.077212ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.244178ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 4.308378ms)
Aug 22 19:34:43.922: INFO: (15) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 2.977804ms)
Aug 22 19:34:43.925: INFO: (16) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 2.983447ms)
Aug 22 19:34:43.926: INFO: (16) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 3.471286ms)
Aug 22 19:34:43.926: INFO: (16) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.637819ms)
Aug 22 19:34:43.926: INFO: (16) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test (200; 3.904097ms)
Aug 22 19:34:43.926: INFO: (16) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 3.891899ms)
Aug 22 19:34:43.927: INFO: (16) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 4.135282ms)
Aug 22 19:34:43.927: INFO: (16) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 4.599688ms)
Aug 22 19:34:43.927: INFO: (16) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 4.653596ms)
Aug 22 19:34:43.927: INFO: (16) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 4.704817ms)
Aug 22 19:34:43.927: INFO: (16) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:460/proxy/: tls baz (200; 4.803366ms)
Aug 22 19:34:43.927: INFO: (16) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 4.867711ms)
Aug 22 19:34:43.931: INFO: (17) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.719528ms)
Aug 22 19:34:43.931: INFO: (17) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 3.745923ms)
Aug 22 19:34:43.931: INFO: (17) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 3.774026ms)
Aug 22 19:34:43.931: INFO: (17) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 3.804141ms)
Aug 22 19:34:43.931: INFO: (17) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: test<... (200; 4.797477ms)
Aug 22 19:34:43.932: INFO: (17) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 4.800635ms)
Aug 22 19:34:43.935: INFO: (18) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 2.818735ms)
Aug 22 19:34:43.935: INFO: (18) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 2.906086ms)
Aug 22 19:34:43.936: INFO: (18) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 3.674891ms)
Aug 22 19:34:43.936: INFO: (18) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname1/proxy/: tls baz (200; 3.881237ms)
Aug 22 19:34:43.936: INFO: (18) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 3.920491ms)
Aug 22 19:34:43.936: INFO: (18) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 3.996964ms)
Aug 22 19:34:43.937: INFO: (18) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 4.099105ms)
Aug 22 19:34:43.937: INFO: (18) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; 4.388697ms)
Aug 22 19:34:43.937: INFO: (18) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 4.492817ms)
Aug 22 19:34:43.937: INFO: (18) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 4.428404ms)
Aug 22 19:34:43.937: INFO: (18) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:162/proxy/: bar (200; 4.460773ms)
Aug 22 19:34:43.945: INFO: (19) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:160/proxy/: foo (200; 7.1489ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/pods/http:proxy-service-rwp7f-nhp4n:1080/proxy/: ... (200; 7.604445ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname2/proxy/: bar (200; 8.865645ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:462/proxy/: tls qux (200; 8.622219ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname2/proxy/: bar (200; 8.946918ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/services/proxy-service-rwp7f:portname1/proxy/: foo (200; 8.609712ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/services/http:proxy-service-rwp7f:portname1/proxy/: foo (200; 8.249912ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n/proxy/: test (200; 8.541559ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/services/https:proxy-service-rwp7f:tlsportname2/proxy/: tls qux (200; 8.937008ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/pods/proxy-service-rwp7f-nhp4n:1080/proxy/: test<... (200; 8.907322ms)
Aug 22 19:34:43.946: INFO: (19) /api/v1/namespaces/proxy-1185/pods/https:proxy-service-rwp7f-nhp4n:443/proxy/: ... (200; ...)
...
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 ...: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-edfc6a07-89d5-499b-ba64-917d7b53b6bf
STEP: Creating configMap with name cm-test-opt-upd-f4e9e728-63e2-4b88-a6b5-3ddd34496bd5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-edfc6a07-89d5-499b-ba64-917d7b53b6bf
STEP: Updating configmap cm-test-opt-upd-f4e9e728-63e2-4b88-a6b5-3ddd34496bd5
STEP: Creating configMap with name cm-test-opt-create-638431b4-1881-4c54-b508-f7ae7d6fac06
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:36:07.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5450" for this suite.

• [SLOW TEST:75.380 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2731,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:36:07.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-5tf4
STEP: Creating a pod to test atomic-volume-subpath
Aug 22 19:36:07.414: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5tf4" in namespace "subpath-2932" to be "success or failure"
Aug 22 19:36:07.421: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.033451ms
Aug 22 19:36:09.454: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040032576s
Aug 22 19:36:11.987: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 4.573341145s
Aug 22 19:36:13.991: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 6.577131137s
Aug 22 19:36:15.994: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 8.580292717s
Aug 22 19:36:17.998: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 10.583912422s
Aug 22 19:36:20.002: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 12.588116355s
Aug 22 19:36:22.006: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 14.59201165s
Aug 22 19:36:24.010: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 16.595686234s
Aug 22 19:36:26.014: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 18.5995685s
Aug 22 19:36:28.047: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 20.632761016s
Aug 22 19:36:30.050: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Running", Reason="", readiness=true. Elapsed: 22.636057982s
Aug 22 19:36:32.053: INFO: Pod "pod-subpath-test-configmap-5tf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.639061105s
STEP: Saw pod success
Aug 22 19:36:32.053: INFO: Pod "pod-subpath-test-configmap-5tf4" satisfied condition "success or failure"
Aug 22 19:36:32.056: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-5tf4 container test-container-subpath-configmap-5tf4: <nil>
STEP: delete the pod
Aug 22 19:36:32.600: INFO: Waiting for pod pod-subpath-test-configmap-5tf4 to disappear
Aug 22 19:36:32.658: INFO: Pod pod-subpath-test-configmap-5tf4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5tf4
Aug 22 19:36:32.658: INFO: Deleting pod "pod-subpath-test-configmap-5tf4" in namespace "subpath-2932"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:36:32.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2932" for this suite.

• [SLOW TEST:25.444 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":155,"skipped":2741,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:36:32.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-383680d2-7f17-42e1-b944-4ad6db85edb1
STEP: Creating a pod to test consume configMaps
Aug 22 19:36:33.685: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631" in namespace "configmap-7174" to be "success or failure"
Aug 22 19:36:34.952: INFO: Pod "pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631": Phase="Pending", Reason="", readiness=false. Elapsed: 1.266946638s
Aug 22 19:36:37.005: INFO: Pod "pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631": Phase="Pending", Reason="", readiness=false. Elapsed: 3.320518215s
Aug 22 19:36:39.065: INFO: Pod "pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631": Phase="Pending", Reason="", readiness=false. Elapsed: 5.380385328s
Aug 22 19:36:41.185: INFO: Pod "pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631": Phase="Pending", Reason="", readiness=false. Elapsed: 7.500490418s
Aug 22 19:36:43.189: INFO: Pod "pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.503769144s
STEP: Saw pod success
Aug 22 19:36:43.189: INFO: Pod "pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631" satisfied condition "success or failure"
Aug 22 19:36:43.191: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631 container configmap-volume-test: <nil>
STEP: delete the pod
Aug 22 19:36:43.289: INFO: Waiting for pod pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631 to disappear
Aug 22 19:36:43.329: INFO: Pod pod-configmaps-ff6db951-ef6f-431d-8328-8bc6b3893631 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:36:43.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7174" for this suite.

• [SLOW TEST:10.671 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2756,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:36:43.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 22 19:36:52.709: INFO: Successfully updated pod "adopt-release-wxvks"
STEP: Checking that the Job readopts the Pod
Aug 22 19:36:52.709: INFO: Waiting up to 15m0s for pod "adopt-release-wxvks" in namespace "job-5482" to be "adopted"
Aug 22 19:36:52.716: INFO: Pod "adopt-release-wxvks": Phase="Running", Reason="", readiness=true. Elapsed: 7.032257ms
Aug 22 19:36:54.719: INFO: Pod "adopt-release-wxvks": Phase="Running", Reason="", readiness=true. Elapsed: 2.010485029s
Aug 22 19:36:54.719: INFO: Pod "adopt-release-wxvks" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 22 19:36:55.231: INFO: Successfully updated pod "adopt-release-wxvks"
STEP: Checking that the Job releases the Pod
Aug 22 19:36:55.231: INFO: Waiting up to 15m0s for pod "adopt-release-wxvks" in namespace "job-5482" to be "released"
Aug 22 19:36:55.356: INFO: Pod "adopt-release-wxvks": Phase="Running", Reason="", readiness=true. Elapsed: 125.352868ms
Aug 22 19:36:55.356: INFO: Pod "adopt-release-wxvks" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:36:55.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5482" for this suite.

• [SLOW TEST:12.159 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":157,"skipped":2765,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:36:55.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 22 19:36:55.895: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 22 19:37:01.118: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:37:02.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5432" for this suite.

• [SLOW TEST:6.948 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":158,"skipped":2770,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:37:02.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 22 19:37:12.959: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 22 19:37:12.968: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 22 19:37:14.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 22 19:37:14.971: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 22 19:37:16.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 22 19:37:16.972: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 22 19:37:18.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 22 19:37:18.972: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 22 19:37:20.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 22 19:37:20.973: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 22 19:37:22.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 22 19:37:22.972: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:37:22.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4691" for this suite.

• [SLOW TEST:20.539 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2772,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:37:22.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 22 19:37:23.509: INFO: Waiting up to 5m0s for pod "pod-05954537-b66e-44e4-9114-4abcc5f6a8da" in namespace "emptydir-3061" to be "success or failure"
Aug 22 19:37:23.519: INFO: Pod "pod-05954537-b66e-44e4-9114-4abcc5f6a8da": Phase="Pending", Reason="", readiness=false. Elapsed: 9.937766ms
Aug 22 19:37:25.523: INFO: Pod "pod-05954537-b66e-44e4-9114-4abcc5f6a8da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014126536s
Aug 22 19:37:27.627: INFO: Pod "pod-05954537-b66e-44e4-9114-4abcc5f6a8da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11856702s
Aug 22 19:37:29.761: INFO: Pod "pod-05954537-b66e-44e4-9114-4abcc5f6a8da": Phase="Running", Reason="", readiness=true. Elapsed: 6.251932087s
Aug 22 19:37:31.764: INFO: Pod "pod-05954537-b66e-44e4-9114-4abcc5f6a8da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.255156122s
STEP: Saw pod success
Aug 22 19:37:31.764: INFO: Pod "pod-05954537-b66e-44e4-9114-4abcc5f6a8da" satisfied condition "success or failure"
Aug 22 19:37:31.767: INFO: Trying to get logs from node jerma-worker pod pod-05954537-b66e-44e4-9114-4abcc5f6a8da container test-container: 
STEP: delete the pod
Aug 22 19:37:31.796: INFO: Waiting for pod pod-05954537-b66e-44e4-9114-4abcc5f6a8da to disappear
Aug 22 19:37:31.799: INFO: Pod pod-05954537-b66e-44e4-9114-4abcc5f6a8da no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:37:31.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3061" for this suite.

• [SLOW TEST:8.820 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2773,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:37:31.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:37:39.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4677" for this suite.

• [SLOW TEST:7.520 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":161,"skipped":2775,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:37:39.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:37:39.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c" in namespace "downward-api-3709" to be "success or failure"
Aug 22 19:37:39.540: INFO: Pod "downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.821784ms
Aug 22 19:37:41.672: INFO: Pod "downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184016415s
Aug 22 19:37:43.749: INFO: Pod "downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260554365s
Aug 22 19:37:46.012: INFO: Pod "downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.523554913s
Aug 22 19:37:48.023: INFO: Pod "downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.534731063s
STEP: Saw pod success
Aug 22 19:37:48.023: INFO: Pod "downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c" satisfied condition "success or failure"
Aug 22 19:37:48.026: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c container client-container: <nil>
STEP: delete the pod
Aug 22 19:37:48.187: INFO: Waiting for pod downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c to disappear
Aug 22 19:37:48.209: INFO: Pod downwardapi-volume-f0b07ca3-0e1d-4368-9174-8edb1f3d2b5c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:37:48.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3709" for this suite.

• [SLOW TEST:8.890 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2785,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:37:48.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 19:37:49.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5" in namespace "downward-api-8886" to be "success or failure"
Aug 22 19:37:49.131: INFO: Pod "downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 73.670765ms
Aug 22 19:37:51.136: INFO: Pod "downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078530944s
Aug 22 19:37:53.168: INFO: Pod "downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11030738s
Aug 22 19:37:55.172: INFO: Pod "downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11426281s
STEP: Saw pod success
Aug 22 19:37:55.172: INFO: Pod "downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5" satisfied condition "success or failure"
Aug 22 19:37:55.175: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5 container client-container: <nil>
STEP: delete the pod
Aug 22 19:37:55.825: INFO: Waiting for pod downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5 to disappear
Aug 22 19:37:56.161: INFO: Pod downwardapi-volume-78e89835-8204-45a6-aec9-d171a238e4b5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:37:56.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8886" for this suite.

• [SLOW TEST:7.970 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2814,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:37:56.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1557
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1557
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1557
Aug 22 19:37:56.372: INFO: Found 0 stateful pods, waiting for 1
Aug 22 19:38:06.431: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 22 19:38:06.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:38:14.470: INFO: stderr: "I0822 19:38:14.162946    2486 log.go:172] (0xc0009cad10) (0xc0009de0a0) Create stream\nI0822 19:38:14.162982    2486 log.go:172] (0xc0009cad10) (0xc0009de0a0) Stream added, broadcasting: 1\nI0822 19:38:14.165563    2486 log.go:172] (0xc0009cad10) Reply frame received for 1\nI0822 19:38:14.165607    2486 log.go:172] (0xc0009cad10) (0xc00095e140) Create stream\nI0822 19:38:14.165616    2486 log.go:172] (0xc0009cad10) (0xc00095e140) Stream added, broadcasting: 3\nI0822 19:38:14.166405    2486 log.go:172] (0xc0009cad10) Reply frame received for 3\nI0822 19:38:14.166433    2486 log.go:172] (0xc0009cad10) (0xc0009de140) Create stream\nI0822 19:38:14.166440    2486 log.go:172] (0xc0009cad10) (0xc0009de140) Stream added, broadcasting: 5\nI0822 19:38:14.167281    2486 log.go:172] (0xc0009cad10) Reply frame received for 5\nI0822 19:38:14.256680    2486 log.go:172] (0xc0009cad10) Data frame received for 5\nI0822 19:38:14.256703    2486 log.go:172] (0xc0009de140) (5) Data frame handling\nI0822 19:38:14.256716    2486 log.go:172] (0xc0009de140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:38:14.454573    2486 log.go:172] (0xc0009cad10) Data frame received for 3\nI0822 19:38:14.454606    2486 log.go:172] (0xc00095e140) (3) Data frame handling\nI0822 19:38:14.454627    2486 log.go:172] (0xc00095e140) (3) Data frame sent\nI0822 19:38:14.454635    2486 log.go:172] (0xc0009cad10) Data frame received for 3\nI0822 19:38:14.454641    2486 log.go:172] (0xc00095e140) (3) Data frame handling\nI0822 19:38:14.454921    2486 log.go:172] (0xc0009cad10) Data frame received for 5\nI0822 19:38:14.454951    2486 log.go:172] (0xc0009de140) (5) Data frame handling\nI0822 19:38:14.457025    2486 log.go:172] (0xc0009cad10) Data frame received for 1\nI0822 19:38:14.457064    2486 log.go:172] (0xc0009de0a0) (1) Data frame handling\nI0822 19:38:14.457090    2486 log.go:172] (0xc0009de0a0) (1) Data frame sent\nI0822 19:38:14.457121    2486 log.go:172] (0xc0009cad10) (0xc0009de0a0) Stream removed, broadcasting: 1\nI0822 19:38:14.457270    2486 log.go:172] (0xc0009cad10) Go away received\nI0822 19:38:14.457608    2486 log.go:172] (0xc0009cad10) (0xc0009de0a0) Stream removed, broadcasting: 1\nI0822 19:38:14.457630    2486 log.go:172] (0xc0009cad10) (0xc00095e140) Stream removed, broadcasting: 3\nI0822 19:38:14.457641    2486 log.go:172] (0xc0009cad10) (0xc0009de140) Stream removed, broadcasting: 5\n"
Aug 22 19:38:14.470: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:38:14.470: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

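The mv in and out of the htdocs directory is how the test toggles health: the ss pods serve index.html behind an HTTP readiness probe, so hiding the file flips Ready to false without killing the pod, and the StatefulSet controller then refuses to scale past the unhealthy replica. The toggle can be reproduced by hand with commands mirroring the ones logged here:

# Break readiness: hide the page the probe fetches
kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-0 -- \
  /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/'
# Watch Ready flip to false; the controller now halts ordered scaling at ss-0
kubectl --kubeconfig=/root/.kube/config get pod ss-0 --namespace=statefulset-1557 -w
# Restore readiness
kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-0 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/'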
Aug 22 19:38:14.473: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 22 19:38:24.477: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:38:24.477: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:38:24.563: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999456s
Aug 22 19:38:25.566: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.921727356s
Aug 22 19:38:26.587: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.918874581s
Aug 22 19:38:27.591: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.89776744s
Aug 22 19:38:28.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.89390924s
Aug 22 19:38:29.598: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.890619915s
Aug 22 19:38:30.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.88648496s
Aug 22 19:38:32.175: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.855922118s
Aug 22 19:38:33.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.309769774s
Aug 22 19:38:34.184: INFO: Verifying statefulset ss doesn't scale past 1 for another 305.763629ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1557
Aug 22 19:38:35.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:38:35.452: INFO: stderr: "I0822 19:38:35.317605    2519 log.go:172] (0xc0005cea50) (0xc000665ae0) Create stream\nI0822 19:38:35.317662    2519 log.go:172] (0xc0005cea50) (0xc000665ae0) Stream added, broadcasting: 1\nI0822 19:38:35.319317    2519 log.go:172] (0xc0005cea50) Reply frame received for 1\nI0822 19:38:35.319351    2519 log.go:172] (0xc0005cea50) (0xc0005ca000) Create stream\nI0822 19:38:35.319360    2519 log.go:172] (0xc0005cea50) (0xc0005ca000) Stream added, broadcasting: 3\nI0822 19:38:35.319937    2519 log.go:172] (0xc0005cea50) Reply frame received for 3\nI0822 19:38:35.319959    2519 log.go:172] (0xc0005cea50) (0xc000665cc0) Create stream\nI0822 19:38:35.319965    2519 log.go:172] (0xc0005cea50) (0xc000665cc0) Stream added, broadcasting: 5\nI0822 19:38:35.320477    2519 log.go:172] (0xc0005cea50) Reply frame received for 5\nI0822 19:38:35.377457    2519 log.go:172] (0xc0005cea50) Data frame received for 5\nI0822 19:38:35.377485    2519 log.go:172] (0xc000665cc0) (5) Data frame handling\nI0822 19:38:35.377505    2519 log.go:172] (0xc000665cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0822 19:38:35.439830    2519 log.go:172] (0xc0005cea50) Data frame received for 3\nI0822 19:38:35.439860    2519 log.go:172] (0xc0005ca000) (3) Data frame handling\nI0822 19:38:35.439884    2519 log.go:172] (0xc0005ca000) (3) Data frame sent\nI0822 19:38:35.440083    2519 log.go:172] (0xc0005cea50) Data frame received for 3\nI0822 19:38:35.440123    2519 log.go:172] (0xc0005ca000) (3) Data frame handling\nI0822 19:38:35.440142    2519 log.go:172] (0xc0005cea50) Data frame received for 5\nI0822 19:38:35.440156    2519 log.go:172] (0xc000665cc0) (5) Data frame handling\nI0822 19:38:35.442050    2519 log.go:172] (0xc0005cea50) Data frame received for 1\nI0822 19:38:35.442089    2519 log.go:172] (0xc000665ae0) (1) Data frame handling\nI0822 19:38:35.442132    2519 log.go:172] (0xc000665ae0) (1) Data frame sent\nI0822 19:38:35.442175    2519 log.go:172] (0xc0005cea50) (0xc000665ae0) Stream removed, broadcasting: 1\nI0822 19:38:35.442224    2519 log.go:172] (0xc0005cea50) Go away received\nI0822 19:38:35.442564    2519 log.go:172] (0xc0005cea50) (0xc000665ae0) Stream removed, broadcasting: 1\nI0822 19:38:35.442585    2519 log.go:172] (0xc0005cea50) (0xc0005ca000) Stream removed, broadcasting: 3\nI0822 19:38:35.442594    2519 log.go:172] (0xc0005cea50) (0xc000665cc0) Stream removed, broadcasting: 5\n"
Aug 22 19:38:35.452: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:38:35.452: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:38:35.455: INFO: Found 1 stateful pods, waiting for 3
Aug 22 19:38:45.460: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:38:45.460: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:38:45.460: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 22 19:38:55.459: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:38:55.459: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:38:55.459: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 22 19:38:55.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:38:55.673: INFO: stderr: "I0822 19:38:55.596650    2540 log.go:172] (0xc000aeedc0) (0xc0006ba1e0) Create stream\nI0822 19:38:55.596706    2540 log.go:172] (0xc000aeedc0) (0xc0006ba1e0) Stream added, broadcasting: 1\nI0822 19:38:55.599890    2540 log.go:172] (0xc000aeedc0) Reply frame received for 1\nI0822 19:38:55.599922    2540 log.go:172] (0xc000aeedc0) (0xc00074a000) Create stream\nI0822 19:38:55.599931    2540 log.go:172] (0xc000aeedc0) (0xc00074a000) Stream added, broadcasting: 3\nI0822 19:38:55.600909    2540 log.go:172] (0xc000aeedc0) Reply frame received for 3\nI0822 19:38:55.600937    2540 log.go:172] (0xc000aeedc0) (0xc0008c0280) Create stream\nI0822 19:38:55.600959    2540 log.go:172] (0xc000aeedc0) (0xc0008c0280) Stream added, broadcasting: 5\nI0822 19:38:55.601873    2540 log.go:172] (0xc000aeedc0) Reply frame received for 5\nI0822 19:38:55.666888    2540 log.go:172] (0xc000aeedc0) Data frame received for 3\nI0822 19:38:55.666928    2540 log.go:172] (0xc00074a000) (3) Data frame handling\nI0822 19:38:55.666940    2540 log.go:172] (0xc00074a000) (3) Data frame sent\nI0822 19:38:55.666973    2540 log.go:172] (0xc000aeedc0) Data frame received for 5\nI0822 19:38:55.667020    2540 log.go:172] (0xc0008c0280) (5) Data frame handling\nI0822 19:38:55.667031    2540 log.go:172] (0xc0008c0280) (5) Data frame sent\nI0822 19:38:55.667040    2540 log.go:172] (0xc000aeedc0) Data frame received for 5\nI0822 19:38:55.667046    2540 log.go:172] (0xc0008c0280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:38:55.667069    2540 log.go:172] (0xc000aeedc0) Data frame received for 3\nI0822 19:38:55.667076    2540 log.go:172] (0xc00074a000) (3) Data frame handling\nI0822 19:38:55.668131    2540 log.go:172] (0xc000aeedc0) Data frame received for 1\nI0822 19:38:55.668150    2540 log.go:172] (0xc0006ba1e0) (1) Data frame handling\nI0822 19:38:55.668156    2540 log.go:172] (0xc0006ba1e0) (1) Data frame sent\nI0822 19:38:55.668165    2540 log.go:172] (0xc000aeedc0) (0xc0006ba1e0) Stream removed, broadcasting: 1\nI0822 19:38:55.668188    2540 log.go:172] (0xc000aeedc0) Go away received\nI0822 19:38:55.668427    2540 log.go:172] (0xc000aeedc0) (0xc0006ba1e0) Stream removed, broadcasting: 1\nI0822 19:38:55.668440    2540 log.go:172] (0xc000aeedc0) (0xc00074a000) Stream removed, broadcasting: 3\nI0822 19:38:55.668445    2540 log.go:172] (0xc000aeedc0) (0xc0008c0280) Stream removed, broadcasting: 5\n"
Aug 22 19:38:55.673: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:38:55.673: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 22 19:38:55.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:38:55.984: INFO: stderr: "I0822 19:38:55.833546    2558 log.go:172] (0xc0000f8370) (0xc000978000) Create stream\nI0822 19:38:55.833634    2558 log.go:172] (0xc0000f8370) (0xc000978000) Stream added, broadcasting: 1\nI0822 19:38:55.835463    2558 log.go:172] (0xc0000f8370) Reply frame received for 1\nI0822 19:38:55.835486    2558 log.go:172] (0xc0000f8370) (0xc0007ce820) Create stream\nI0822 19:38:55.835493    2558 log.go:172] (0xc0000f8370) (0xc0007ce820) Stream added, broadcasting: 3\nI0822 19:38:55.836305    2558 log.go:172] (0xc0000f8370) Reply frame received for 3\nI0822 19:38:55.836328    2558 log.go:172] (0xc0000f8370) (0xc0007c0000) Create stream\nI0822 19:38:55.836335    2558 log.go:172] (0xc0000f8370) (0xc0007c0000) Stream added, broadcasting: 5\nI0822 19:38:55.837400    2558 log.go:172] (0xc0000f8370) Reply frame received for 5\nI0822 19:38:55.907479    2558 log.go:172] (0xc0000f8370) Data frame received for 5\nI0822 19:38:55.907501    2558 log.go:172] (0xc0007c0000) (5) Data frame handling\nI0822 19:38:55.907519    2558 log.go:172] (0xc0007c0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:38:55.971549    2558 log.go:172] (0xc0000f8370) Data frame received for 3\nI0822 19:38:55.971589    2558 log.go:172] (0xc0007ce820) (3) Data frame handling\nI0822 19:38:55.971622    2558 log.go:172] (0xc0007ce820) (3) Data frame sent\nI0822 19:38:55.972149    2558 log.go:172] (0xc0000f8370) Data frame received for 3\nI0822 19:38:55.972162    2558 log.go:172] (0xc0007ce820) (3) Data frame handling\nI0822 19:38:55.972428    2558 log.go:172] (0xc0000f8370) Data frame received for 5\nI0822 19:38:55.972453    2558 log.go:172] (0xc0007c0000) (5) Data frame handling\nI0822 19:38:55.974308    2558 log.go:172] (0xc0000f8370) Data frame received for 1\nI0822 19:38:55.974332    2558 log.go:172] (0xc000978000) (1) Data frame handling\nI0822 19:38:55.974344    2558 log.go:172] (0xc000978000) (1) Data frame sent\nI0822 19:38:55.974357    2558 log.go:172] (0xc0000f8370) (0xc000978000) Stream removed, broadcasting: 1\nI0822 19:38:55.974504    2558 log.go:172] (0xc0000f8370) Go away received\nI0822 19:38:55.974757    2558 log.go:172] (0xc0000f8370) (0xc000978000) Stream removed, broadcasting: 1\nI0822 19:38:55.974784    2558 log.go:172] (0xc0000f8370) (0xc0007ce820) Stream removed, broadcasting: 3\nI0822 19:38:55.974808    2558 log.go:172] (0xc0000f8370) (0xc0007c0000) Stream removed, broadcasting: 5\n"
Aug 22 19:38:55.984: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:38:55.984: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 22 19:38:55.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:38:56.257: INFO: stderr: "I0822 19:38:56.100661    2576 log.go:172] (0xc0009e6e70) (0xc000a2e5a0) Create stream\nI0822 19:38:56.100714    2576 log.go:172] (0xc0009e6e70) (0xc000a2e5a0) Stream added, broadcasting: 1\nI0822 19:38:56.102465    2576 log.go:172] (0xc0009e6e70) Reply frame received for 1\nI0822 19:38:56.102507    2576 log.go:172] (0xc0009e6e70) (0xc0009c4000) Create stream\nI0822 19:38:56.102517    2576 log.go:172] (0xc0009e6e70) (0xc0009c4000) Stream added, broadcasting: 3\nI0822 19:38:56.103332    2576 log.go:172] (0xc0009e6e70) Reply frame received for 3\nI0822 19:38:56.103377    2576 log.go:172] (0xc0009e6e70) (0xc000bac140) Create stream\nI0822 19:38:56.103388    2576 log.go:172] (0xc0009e6e70) (0xc000bac140) Stream added, broadcasting: 5\nI0822 19:38:56.104379    2576 log.go:172] (0xc0009e6e70) Reply frame received for 5\nI0822 19:38:56.170951    2576 log.go:172] (0xc0009e6e70) Data frame received for 5\nI0822 19:38:56.170978    2576 log.go:172] (0xc000bac140) (5) Data frame handling\nI0822 19:38:56.170997    2576 log.go:172] (0xc000bac140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:38:56.246425    2576 log.go:172] (0xc0009e6e70) Data frame received for 3\nI0822 19:38:56.246471    2576 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0822 19:38:56.246484    2576 log.go:172] (0xc0009c4000) (3) Data frame sent\nI0822 19:38:56.246504    2576 log.go:172] (0xc0009e6e70) Data frame received for 3\nI0822 19:38:56.246514    2576 log.go:172] (0xc0009c4000) (3) Data frame handling\nI0822 19:38:56.246554    2576 log.go:172] (0xc0009e6e70) Data frame received for 5\nI0822 19:38:56.246576    2576 log.go:172] (0xc000bac140) (5) Data frame handling\nI0822 19:38:56.249062    2576 log.go:172] (0xc0009e6e70) Data frame received for 1\nI0822 19:38:56.249088    2576 log.go:172] (0xc000a2e5a0) (1) Data frame handling\nI0822 19:38:56.249116    2576 log.go:172] (0xc000a2e5a0) (1) Data frame sent\nI0822 19:38:56.249138    2576 log.go:172] (0xc0009e6e70) (0xc000a2e5a0) Stream removed, broadcasting: 1\nI0822 19:38:56.249159    2576 log.go:172] (0xc0009e6e70) Go away received\nI0822 19:38:56.249519    2576 log.go:172] (0xc0009e6e70) (0xc000a2e5a0) Stream removed, broadcasting: 1\nI0822 19:38:56.249545    2576 log.go:172] (0xc0009e6e70) (0xc0009c4000) Stream removed, broadcasting: 3\nI0822 19:38:56.249555    2576 log.go:172] (0xc0009e6e70) (0xc000bac140) Stream removed, broadcasting: 5\n"
Aug 22 19:38:56.258: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:38:56.258: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 22 19:38:56.258: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:38:56.260: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Aug 22 19:39:06.267: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:39:06.267: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:39:06.267: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:39:06.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999511s
Aug 22 19:39:07.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97602003s
Aug 22 19:39:08.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.797080123s
Aug 22 19:39:09.822: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.485496942s
Aug 22 19:39:10.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.449852582s
Aug 22 19:39:12.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.37739343s
Aug 22 19:39:13.038: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.247719153s
Aug 22 19:39:14.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.234172451s
Aug 22 19:39:15.187: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.115268521s
Aug 22 19:39:16.199: INFO: Verifying statefulset ss doesn't scale past 3 for another 84.558589ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1557
Aug 22 19:39:17.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:39:17.513: INFO: stderr: "I0822 19:39:17.426115    2595 log.go:172] (0xc0000f7340) (0xc00072da40) Create stream\nI0822 19:39:17.426185    2595 log.go:172] (0xc0000f7340) (0xc00072da40) Stream added, broadcasting: 1\nI0822 19:39:17.429136    2595 log.go:172] (0xc0000f7340) Reply frame received for 1\nI0822 19:39:17.429169    2595 log.go:172] (0xc0000f7340) (0xc0008d2000) Create stream\nI0822 19:39:17.429179    2595 log.go:172] (0xc0000f7340) (0xc0008d2000) Stream added, broadcasting: 3\nI0822 19:39:17.429904    2595 log.go:172] (0xc0000f7340) Reply frame received for 3\nI0822 19:39:17.429933    2595 log.go:172] (0xc0000f7340) (0xc00072dc20) Create stream\nI0822 19:39:17.429952    2595 log.go:172] (0xc0000f7340) (0xc00072dc20) Stream added, broadcasting: 5\nI0822 19:39:17.430733    2595 log.go:172] (0xc0000f7340) Reply frame received for 5\nI0822 19:39:17.503457    2595 log.go:172] (0xc0000f7340) Data frame received for 3\nI0822 19:39:17.503518    2595 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0822 19:39:17.503535    2595 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0822 19:39:17.503552    2595 log.go:172] (0xc0000f7340) Data frame received for 3\nI0822 19:39:17.503574    2595 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0822 19:39:17.503620    2595 log.go:172] (0xc0000f7340) Data frame received for 5\nI0822 19:39:17.503661    2595 log.go:172] (0xc00072dc20) (5) Data frame handling\nI0822 19:39:17.503703    2595 log.go:172] (0xc00072dc20) (5) Data frame sent\nI0822 19:39:17.503719    2595 log.go:172] (0xc0000f7340) Data frame received for 5\nI0822 19:39:17.503730    2595 log.go:172] (0xc00072dc20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0822 19:39:17.505333    2595 log.go:172] (0xc0000f7340) Data frame received for 1\nI0822 19:39:17.505360    2595 log.go:172] (0xc00072da40) (1) Data frame handling\nI0822 19:39:17.505376    2595 log.go:172] (0xc00072da40) (1) Data frame sent\nI0822 19:39:17.505390    2595 log.go:172] (0xc0000f7340) (0xc00072da40) Stream removed, broadcasting: 1\nI0822 19:39:17.505408    2595 log.go:172] (0xc0000f7340) Go away received\nI0822 19:39:17.505858    2595 log.go:172] (0xc0000f7340) (0xc00072da40) Stream removed, broadcasting: 1\nI0822 19:39:17.505879    2595 log.go:172] (0xc0000f7340) (0xc0008d2000) Stream removed, broadcasting: 3\nI0822 19:39:17.505891    2595 log.go:172] (0xc0000f7340) (0xc00072dc20) Stream removed, broadcasting: 5\n"
Aug 22 19:39:17.513: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:39:17.513: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:39:17.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:39:17.742: INFO: stderr: "I0822 19:39:17.667133    2617 log.go:172] (0xc00068c790) (0xc000688000) Create stream\nI0822 19:39:17.667207    2617 log.go:172] (0xc00068c790) (0xc000688000) Stream added, broadcasting: 1\nI0822 19:39:17.669451    2617 log.go:172] (0xc00068c790) Reply frame received for 1\nI0822 19:39:17.669503    2617 log.go:172] (0xc00068c790) (0xc00071bb80) Create stream\nI0822 19:39:17.669520    2617 log.go:172] (0xc00068c790) (0xc00071bb80) Stream added, broadcasting: 3\nI0822 19:39:17.670489    2617 log.go:172] (0xc00068c790) Reply frame received for 3\nI0822 19:39:17.670524    2617 log.go:172] (0xc00068c790) (0xc00071bd60) Create stream\nI0822 19:39:17.670543    2617 log.go:172] (0xc00068c790) (0xc00071bd60) Stream added, broadcasting: 5\nI0822 19:39:17.671343    2617 log.go:172] (0xc00068c790) Reply frame received for 5\nI0822 19:39:17.726906    2617 log.go:172] (0xc00068c790) Data frame received for 5\nI0822 19:39:17.726963    2617 log.go:172] (0xc00071bd60) (5) Data frame handling\nI0822 19:39:17.726986    2617 log.go:172] (0xc00071bd60) (5) Data frame sent\nI0822 19:39:17.727000    2617 log.go:172] (0xc00068c790) Data frame received for 5\nI0822 19:39:17.727013    2617 log.go:172] (0xc00071bd60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0822 19:39:17.727038    2617 log.go:172] (0xc00068c790) Data frame received for 3\nI0822 19:39:17.727092    2617 log.go:172] (0xc00071bb80) (3) Data frame handling\nI0822 19:39:17.727120    2617 log.go:172] (0xc00071bb80) (3) Data frame sent\nI0822 19:39:17.727138    2617 log.go:172] (0xc00068c790) Data frame received for 3\nI0822 19:39:17.727152    2617 log.go:172] (0xc00071bb80) (3) Data frame handling\nI0822 19:39:17.728522    2617 log.go:172] (0xc00068c790) Data frame received for 1\nI0822 19:39:17.728548    2617 log.go:172] (0xc000688000) (1) Data frame handling\nI0822 19:39:17.728576    2617 log.go:172] (0xc000688000) (1) Data frame sent\nI0822 19:39:17.728608    2617 log.go:172] (0xc00068c790) (0xc000688000) Stream removed, broadcasting: 1\nI0822 19:39:17.728626    2617 log.go:172] (0xc00068c790) Go away received\nI0822 19:39:17.729158    2617 log.go:172] (0xc00068c790) (0xc000688000) Stream removed, broadcasting: 1\nI0822 19:39:17.729183    2617 log.go:172] (0xc00068c790) (0xc00071bb80) Stream removed, broadcasting: 3\nI0822 19:39:17.729196    2617 log.go:172] (0xc00068c790) (0xc00071bd60) Stream removed, broadcasting: 5\n"
Aug 22 19:39:17.742: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:39:17.742: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:39:17.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1557 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:39:17.943: INFO: stderr: "I0822 19:39:17.871892    2640 log.go:172] (0xc0009414a0) (0xc00099a820) Create stream\nI0822 19:39:17.871968    2640 log.go:172] (0xc0009414a0) (0xc00099a820) Stream added, broadcasting: 1\nI0822 19:39:17.876622    2640 log.go:172] (0xc0009414a0) Reply frame received for 1\nI0822 19:39:17.876673    2640 log.go:172] (0xc0009414a0) (0xc0006605a0) Create stream\nI0822 19:39:17.876689    2640 log.go:172] (0xc0009414a0) (0xc0006605a0) Stream added, broadcasting: 3\nI0822 19:39:17.877800    2640 log.go:172] (0xc0009414a0) Reply frame received for 3\nI0822 19:39:17.877837    2640 log.go:172] (0xc0009414a0) (0xc0002bd360) Create stream\nI0822 19:39:17.877848    2640 log.go:172] (0xc0009414a0) (0xc0002bd360) Stream added, broadcasting: 5\nI0822 19:39:17.878990    2640 log.go:172] (0xc0009414a0) Reply frame received for 5\nI0822 19:39:17.933122    2640 log.go:172] (0xc0009414a0) Data frame received for 5\nI0822 19:39:17.933181    2640 log.go:172] (0xc0002bd360) (5) Data frame handling\nI0822 19:39:17.933201    2640 log.go:172] (0xc0002bd360) (5) Data frame sent\nI0822 19:39:17.933221    2640 log.go:172] (0xc0009414a0) Data frame received for 5\nI0822 19:39:17.933233    2640 log.go:172] (0xc0002bd360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0822 19:39:17.933275    2640 log.go:172] (0xc0009414a0) Data frame received for 3\nI0822 19:39:17.933299    2640 log.go:172] (0xc0006605a0) (3) Data frame handling\nI0822 19:39:17.933314    2640 log.go:172] (0xc0006605a0) (3) Data frame sent\nI0822 19:39:17.933337    2640 log.go:172] (0xc0009414a0) Data frame received for 3\nI0822 19:39:17.933349    2640 log.go:172] (0xc0006605a0) (3) Data frame handling\nI0822 19:39:17.934726    2640 log.go:172] (0xc0009414a0) Data frame received for 1\nI0822 19:39:17.934749    2640 log.go:172] (0xc00099a820) (1) Data frame handling\nI0822 19:39:17.934767    2640 log.go:172] (0xc00099a820) (1) Data frame sent\nI0822 19:39:17.934908    2640 log.go:172] (0xc0009414a0) (0xc00099a820) Stream removed, broadcasting: 1\nI0822 19:39:17.934929    2640 log.go:172] (0xc0009414a0) Go away received\nI0822 19:39:17.935353    2640 log.go:172] (0xc0009414a0) (0xc00099a820) Stream removed, broadcasting: 1\nI0822 19:39:17.935382    2640 log.go:172] (0xc0009414a0) (0xc0006605a0) Stream removed, broadcasting: 3\nI0822 19:39:17.935401    2640 log.go:172] (0xc0009414a0) (0xc0002bd360) Stream removed, broadcasting: 5\n"
Aug 22 19:39:17.943: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:39:17.943: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:39:17.943: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 22 19:39:58.059: INFO: Deleting all statefulset in ns statefulset-1557
Aug 22 19:39:58.074: INFO: Scaling statefulset ss to 0
Aug 22 19:39:58.128: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:39:58.130: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:39:58.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1557" for this suite.

• [SLOW TEST:121.967 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":164,"skipped":2819,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:39:58.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4938
STEP: Creating an active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-4938
STEP: creating replication controller externalsvc in namespace services-4938
I0822 19:39:58.430791       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4938, replica count: 2
I0822 19:40:01.481149       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:40:04.481387       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:40:07.481649       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:40:10.481881       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 22 19:40:10.666: INFO: Creating new exec pod
Aug 22 19:40:21.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4938 execpod8s2c6 -- /bin/sh -x -c nslookup clusterip-service'
Aug 22 19:40:21.271: INFO: stderr: "I0822 19:40:21.203823    2662 log.go:172] (0xc000a2c370) (0xc0000c65a0) Create stream\nI0822 19:40:21.203863    2662 log.go:172] (0xc000a2c370) (0xc0000c65a0) Stream added, broadcasting: 1\nI0822 19:40:21.205427    2662 log.go:172] (0xc000a2c370) Reply frame received for 1\nI0822 19:40:21.205467    2662 log.go:172] (0xc000a2c370) (0xc0007b8000) Create stream\nI0822 19:40:21.205476    2662 log.go:172] (0xc000a2c370) (0xc0007b8000) Stream added, broadcasting: 3\nI0822 19:40:21.206174    2662 log.go:172] (0xc000a2c370) Reply frame received for 3\nI0822 19:40:21.206199    2662 log.go:172] (0xc000a2c370) (0xc0000c6640) Create stream\nI0822 19:40:21.206209    2662 log.go:172] (0xc000a2c370) (0xc0000c6640) Stream added, broadcasting: 5\nI0822 19:40:21.206951    2662 log.go:172] (0xc000a2c370) Reply frame received for 5\nI0822 19:40:21.256190    2662 log.go:172] (0xc000a2c370) Data frame received for 5\nI0822 19:40:21.256214    2662 log.go:172] (0xc0000c6640) (5) Data frame handling\nI0822 19:40:21.256222    2662 log.go:172] (0xc0000c6640) (5) Data frame sent\n+ nslookup clusterip-service\nI0822 19:40:21.261343    2662 log.go:172] (0xc000a2c370) Data frame received for 3\nI0822 19:40:21.261366    2662 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0822 19:40:21.261384    2662 log.go:172] (0xc0007b8000) (3) Data frame sent\nI0822 19:40:21.262105    2662 log.go:172] (0xc000a2c370) Data frame received for 3\nI0822 19:40:21.262123    2662 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0822 19:40:21.262140    2662 log.go:172] (0xc0007b8000) (3) Data frame sent\nI0822 19:40:21.262463    2662 log.go:172] (0xc000a2c370) Data frame received for 3\nI0822 19:40:21.262476    2662 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0822 19:40:21.262490    2662 log.go:172] (0xc000a2c370) Data frame received for 5\nI0822 19:40:21.262497    2662 log.go:172] (0xc0000c6640) (5) Data frame handling\nI0822 19:40:21.264011    2662 log.go:172] (0xc000a2c370) Data frame received for 1\nI0822 19:40:21.264032    2662 log.go:172] (0xc0000c65a0) (1) Data frame handling\nI0822 19:40:21.264040    2662 log.go:172] (0xc0000c65a0) (1) Data frame sent\nI0822 19:40:21.264050    2662 log.go:172] (0xc000a2c370) (0xc0000c65a0) Stream removed, broadcasting: 1\nI0822 19:40:21.264077    2662 log.go:172] (0xc000a2c370) Go away received\nI0822 19:40:21.264321    2662 log.go:172] (0xc000a2c370) (0xc0000c65a0) Stream removed, broadcasting: 1\nI0822 19:40:21.264332    2662 log.go:172] (0xc000a2c370) (0xc0007b8000) Stream removed, broadcasting: 3\nI0822 19:40:21.264340    2662 log.go:172] (0xc000a2c370) (0xc0000c6640) Stream removed, broadcasting: 5\n"
Aug 22 19:40:21.271: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4938.svc.cluster.local\tcanonical name = externalsvc.services-4938.svc.cluster.local.\nName:\texternalsvc.services-4938.svc.cluster.local\nAddress: 10.111.177.100\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4938, will wait for the garbage collector to delete the pods
Aug 22 19:40:21.372: INFO: Deleting ReplicationController externalsvc took: 5.235762ms
Aug 22 19:40:21.872: INFO: Terminating ReplicationController externalsvc pods took: 500.257395ms
Aug 22 19:40:32.355: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:40:32.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4938" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:34.852 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":165,"skipped":2828,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:40:33.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-h2tf
STEP: Creating a pod to test atomic-volume-subpath
Aug 22 19:40:34.206: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h2tf" in namespace "subpath-5348" to be "success or failure"
Aug 22 19:40:34.250: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Pending", Reason="", readiness=false. Elapsed: 43.779609ms
Aug 22 19:40:36.466: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25950352s
Aug 22 19:40:38.619: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412662812s
Aug 22 19:40:40.920: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.713142942s
Aug 22 19:40:42.923: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 8.716164437s
Aug 22 19:40:44.927: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 10.720336904s
Aug 22 19:40:46.931: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 12.724775752s
Aug 22 19:40:48.935: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 14.728332354s
Aug 22 19:40:50.938: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 16.73207094s
Aug 22 19:40:52.984: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 18.777230122s
Aug 22 19:40:54.988: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 20.781124894s
Aug 22 19:40:56.992: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 22.785296776s
Aug 22 19:40:58.996: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Running", Reason="", readiness=true. Elapsed: 24.78969635s
Aug 22 19:41:01.000: INFO: Pod "pod-subpath-test-configmap-h2tf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.793694806s
STEP: Saw pod success
Aug 22 19:41:01.000: INFO: Pod "pod-subpath-test-configmap-h2tf" satisfied condition "success or failure"
Aug 22 19:41:01.003: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-h2tf container test-container-subpath-configmap-h2tf: 
STEP: delete the pod
Aug 22 19:41:01.182: INFO: Waiting for pod pod-subpath-test-configmap-h2tf to disappear
Aug 22 19:41:01.367: INFO: Pod pod-subpath-test-configmap-h2tf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-h2tf
Aug 22 19:41:01.367: INFO: Deleting pod "pod-subpath-test-configmap-h2tf" in namespace "subpath-5348"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:41:01.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5348" for this suite.

• [SLOW TEST:28.378 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":166,"skipped":2831,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:41:01.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:41:01.518: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:41:02.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6328" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":167,"skipped":2840,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:41:02.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 22 19:41:08.732: INFO: &Pod{ObjectMeta:{send-events-415661cb-98b7-4fe2-93ef-421f62ed8608  events-5277 /api/v1/namespaces/events-5277/pods/send-events-415661cb-98b7-4fe2-93ef-421f62ed8608 1ee57f6e-32ae-461a-bb96-476f4867a961 2551575 0 2020-08-22 19:41:02 +0000 UTC   map[name:foo time:706409110] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-82bhx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-82bhx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-82bhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:41:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:41:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:41:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 19:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.138,StartTime:2020-08-22 19:41:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 19:41:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://65f062117bf7de9bc123344e7c0ac9913a1f73867de4b829c926fadd6a53846a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 22 19:41:10.737: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 22 19:41:12.762: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:41:12.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5277" for this suite.

• [SLOW TEST:10.393 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":168,"skipped":2879,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:41:12.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 22 19:41:25.917: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 22 19:41:25.922: INFO: Pod pod-with-prestop-http-hook still exists
Aug 22 19:41:27.922: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 22 19:41:27.928: INFO: Pod pod-with-prestop-http-hook still exists
Aug 22 19:41:29.922: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 22 19:41:30.110: INFO: Pod pod-with-prestop-http-hook still exists
Aug 22 19:41:31.922: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 22 19:41:31.926: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:41:31.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6829" for this suite.

• [SLOW TEST:18.990 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2880,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:41:31.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:41:32.517: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:41:34.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:41:36.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722092, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:41:40.146: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:41:41.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2724" for this suite.
STEP: Destroying namespace "webhook-2724-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.332 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":170,"skipped":2889,"failed":0}
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:41:41.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Aug 22 19:41:41.453: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:41:41.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2450" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":171,"skipped":2889,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:41:42.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:41:47.828: INFO: Waiting up to 5m0s for pod "client-envvars-97990191-a40e-432a-8465-9e324133a918" in namespace "pods-9573" to be "success or failure"
Aug 22 19:41:47.868: INFO: Pod "client-envvars-97990191-a40e-432a-8465-9e324133a918": Phase="Pending", Reason="", readiness=false. Elapsed: 40.31919ms
Aug 22 19:41:49.873: INFO: Pod "client-envvars-97990191-a40e-432a-8465-9e324133a918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044806574s
Aug 22 19:41:51.967: INFO: Pod "client-envvars-97990191-a40e-432a-8465-9e324133a918": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139392362s
Aug 22 19:41:53.972: INFO: Pod "client-envvars-97990191-a40e-432a-8465-9e324133a918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14362667s
STEP: Saw pod success
Aug 22 19:41:53.972: INFO: Pod "client-envvars-97990191-a40e-432a-8465-9e324133a918" satisfied condition "success or failure"
Aug 22 19:41:53.975: INFO: Trying to get logs from node jerma-worker pod client-envvars-97990191-a40e-432a-8465-9e324133a918 container env3cont: 
STEP: delete the pod
Aug 22 19:41:54.429: INFO: Waiting for pod client-envvars-97990191-a40e-432a-8465-9e324133a918 to disappear
Aug 22 19:41:54.443: INFO: Pod client-envvars-97990191-a40e-432a-8465-9e324133a918 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:41:54.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9573" for this suite.

• [SLOW TEST:11.891 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2896,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:41:54.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:42:14.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9256" for this suite.
STEP: Destroying namespace "nsdeletetest-1751" for this suite.
Aug 22 19:42:14.151: INFO: Namespace nsdeletetest-1751 was already deleted
STEP: Destroying namespace "nsdeletetest-952" for this suite.

• [SLOW TEST:19.704 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":173,"skipped":2912,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:42:14.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:42:15.289: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:42:17.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:42:19.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722135, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:42:22.350: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:42:22.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8306-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:42:23.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1786" for this suite.
STEP: Destroying namespace "webhook-1786-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.489 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":174,"skipped":2940,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:42:23.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-634
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 22 19:42:23.738: INFO: Found 0 stateful pods, waiting for 3
Aug 22 19:42:33.744: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:42:33.744: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:42:33.744: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 22 19:42:43.743: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:42:43.743: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:42:43.743: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:42:43.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-634 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:42:44.246: INFO: stderr: "I0822 19:42:43.876561    2700 log.go:172] (0xc000a386e0) (0xc0009cc000) Create stream\nI0822 19:42:43.876637    2700 log.go:172] (0xc000a386e0) (0xc0009cc000) Stream added, broadcasting: 1\nI0822 19:42:43.879472    2700 log.go:172] (0xc000a386e0) Reply frame received for 1\nI0822 19:42:43.879526    2700 log.go:172] (0xc000a386e0) (0xc0009cc0a0) Create stream\nI0822 19:42:43.879541    2700 log.go:172] (0xc000a386e0) (0xc0009cc0a0) Stream added, broadcasting: 3\nI0822 19:42:43.880857    2700 log.go:172] (0xc000a386e0) Reply frame received for 3\nI0822 19:42:43.880911    2700 log.go:172] (0xc000a386e0) (0xc000677b80) Create stream\nI0822 19:42:43.880937    2700 log.go:172] (0xc000a386e0) (0xc000677b80) Stream added, broadcasting: 5\nI0822 19:42:43.881859    2700 log.go:172] (0xc000a386e0) Reply frame received for 5\nI0822 19:42:43.975832    2700 log.go:172] (0xc000a386e0) Data frame received for 5\nI0822 19:42:43.975862    2700 log.go:172] (0xc000677b80) (5) Data frame handling\nI0822 19:42:43.975883    2700 log.go:172] (0xc000677b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:42:44.236302    2700 log.go:172] (0xc000a386e0) Data frame received for 3\nI0822 19:42:44.236329    2700 log.go:172] (0xc0009cc0a0) (3) Data frame handling\nI0822 19:42:44.236353    2700 log.go:172] (0xc0009cc0a0) (3) Data frame sent\nI0822 19:42:44.236361    2700 log.go:172] (0xc000a386e0) Data frame received for 3\nI0822 19:42:44.236365    2700 log.go:172] (0xc0009cc0a0) (3) Data frame handling\nI0822 19:42:44.236560    2700 log.go:172] (0xc000a386e0) Data frame received for 5\nI0822 19:42:44.236571    2700 log.go:172] (0xc000677b80) (5) Data frame handling\nI0822 19:42:44.238258    2700 log.go:172] (0xc000a386e0) Data frame received for 1\nI0822 19:42:44.238276    2700 log.go:172] (0xc0009cc000) (1) Data frame handling\nI0822 19:42:44.238286    2700 log.go:172] (0xc0009cc000) (1) Data frame sent\nI0822 19:42:44.238298    2700 log.go:172] (0xc000a386e0) (0xc0009cc000) Stream removed, broadcasting: 1\nI0822 19:42:44.238314    2700 log.go:172] (0xc000a386e0) Go away received\nI0822 19:42:44.238651    2700 log.go:172] (0xc000a386e0) (0xc0009cc000) Stream removed, broadcasting: 1\nI0822 19:42:44.238674    2700 log.go:172] (0xc000a386e0) (0xc0009cc0a0) Stream removed, broadcasting: 3\nI0822 19:42:44.238681    2700 log.go:172] (0xc000a386e0) (0xc000677b80) Stream removed, broadcasting: 5\n"
Aug 22 19:42:44.246: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:42:44.246: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 22 19:42:54.275: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 22 19:43:04.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-634 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:43:04.977: INFO: stderr: "I0822 19:43:04.903477    2722 log.go:172] (0xc0009ea8f0) (0xc0007c01e0) Create stream\nI0822 19:43:04.903527    2722 log.go:172] (0xc0009ea8f0) (0xc0007c01e0) Stream added, broadcasting: 1\nI0822 19:43:04.915556    2722 log.go:172] (0xc0009ea8f0) Reply frame received for 1\nI0822 19:43:04.915600    2722 log.go:172] (0xc0009ea8f0) (0xc000434640) Create stream\nI0822 19:43:04.915609    2722 log.go:172] (0xc0009ea8f0) (0xc000434640) Stream added, broadcasting: 3\nI0822 19:43:04.916883    2722 log.go:172] (0xc0009ea8f0) Reply frame received for 3\nI0822 19:43:04.916924    2722 log.go:172] (0xc0009ea8f0) (0xc0005a65a0) Create stream\nI0822 19:43:04.916934    2722 log.go:172] (0xc0009ea8f0) (0xc0005a65a0) Stream added, broadcasting: 5\nI0822 19:43:04.919006    2722 log.go:172] (0xc0009ea8f0) Reply frame received for 5\nI0822 19:43:04.966267    2722 log.go:172] (0xc0009ea8f0) Data frame received for 5\nI0822 19:43:04.966320    2722 log.go:172] (0xc0005a65a0) (5) Data frame handling\nI0822 19:43:04.966339    2722 log.go:172] (0xc0005a65a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0822 19:43:04.966370    2722 log.go:172] (0xc0009ea8f0) Data frame received for 3\nI0822 19:43:04.966395    2722 log.go:172] (0xc000434640) (3) Data frame handling\nI0822 19:43:04.966419    2722 log.go:172] (0xc000434640) (3) Data frame sent\nI0822 19:43:04.966438    2722 log.go:172] (0xc0009ea8f0) Data frame received for 3\nI0822 19:43:04.966459    2722 log.go:172] (0xc000434640) (3) Data frame handling\nI0822 19:43:04.966483    2722 log.go:172] (0xc0009ea8f0) Data frame received for 5\nI0822 19:43:04.966494    2722 log.go:172] (0xc0005a65a0) (5) Data frame handling\nI0822 19:43:04.968073    2722 log.go:172] (0xc0009ea8f0) Data frame received for 1\nI0822 19:43:04.968095    2722 log.go:172] (0xc0007c01e0) (1) Data frame handling\nI0822 19:43:04.968107    2722 log.go:172] (0xc0007c01e0) (1) Data frame sent\nI0822 19:43:04.968123    2722 log.go:172] (0xc0009ea8f0) (0xc0007c01e0) Stream removed, broadcasting: 1\nI0822 19:43:04.968177    2722 log.go:172] (0xc0009ea8f0) Go away received\nI0822 19:43:04.968594    2722 log.go:172] (0xc0009ea8f0) (0xc0007c01e0) Stream removed, broadcasting: 1\nI0822 19:43:04.968619    2722 log.go:172] (0xc0009ea8f0) (0xc000434640) Stream removed, broadcasting: 3\nI0822 19:43:04.968631    2722 log.go:172] (0xc0009ea8f0) (0xc0005a65a0) Stream removed, broadcasting: 5\n"
Aug 22 19:43:04.977: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:43:04.977: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:43:15.258: INFO: Waiting for StatefulSet statefulset-634/ss2 to complete update
Aug 22 19:43:15.258: INFO: Waiting for Pod statefulset-634/ss2-0 to reach update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Aug 22 19:43:15.258: INFO: Waiting for Pod statefulset-634/ss2-1 to reach update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Aug 22 19:43:25.265: INFO: Waiting for StatefulSet statefulset-634/ss2 to complete update
Aug 22 19:43:25.265: INFO: Waiting for Pod statefulset-634/ss2-0 to reach update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Aug 22 19:43:25.265: INFO: Waiting for Pod statefulset-634/ss2-1 to reach update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Aug 22 19:43:35.264: INFO: Waiting for StatefulSet statefulset-634/ss2 to complete update
Aug 22 19:43:35.264: INFO: Waiting for Pod statefulset-634/ss2-0 to reach update revision ss2-84f9d6bf57 (pod is still at revision ss2-65c7964b94)
Aug 22 19:43:45.366: INFO: Waiting for StatefulSet statefulset-634/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 22 19:43:55.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-634 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:43:55.533: INFO: stderr: "I0822 19:43:55.399810    2746 log.go:172] (0xc000977550) (0xc000914640) Create stream\nI0822 19:43:55.399875    2746 log.go:172] (0xc000977550) (0xc000914640) Stream added, broadcasting: 1\nI0822 19:43:55.404168    2746 log.go:172] (0xc000977550) Reply frame received for 1\nI0822 19:43:55.404214    2746 log.go:172] (0xc000977550) (0xc0006c6640) Create stream\nI0822 19:43:55.404231    2746 log.go:172] (0xc000977550) (0xc0006c6640) Stream added, broadcasting: 3\nI0822 19:43:55.405268    2746 log.go:172] (0xc000977550) Reply frame received for 3\nI0822 19:43:55.405306    2746 log.go:172] (0xc000977550) (0xc000577400) Create stream\nI0822 19:43:55.405315    2746 log.go:172] (0xc000977550) (0xc000577400) Stream added, broadcasting: 5\nI0822 19:43:55.406097    2746 log.go:172] (0xc000977550) Reply frame received for 5\nI0822 19:43:55.480665    2746 log.go:172] (0xc000977550) Data frame received for 5\nI0822 19:43:55.480689    2746 log.go:172] (0xc000577400) (5) Data frame handling\nI0822 19:43:55.480703    2746 log.go:172] (0xc000577400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:43:55.519566    2746 log.go:172] (0xc000977550) Data frame received for 5\nI0822 19:43:55.519605    2746 log.go:172] (0xc000577400) (5) Data frame handling\nI0822 19:43:55.519629    2746 log.go:172] (0xc000977550) Data frame received for 3\nI0822 19:43:55.519636    2746 log.go:172] (0xc0006c6640) (3) Data frame handling\nI0822 19:43:55.519650    2746 log.go:172] (0xc0006c6640) (3) Data frame sent\nI0822 19:43:55.519667    2746 log.go:172] (0xc000977550) Data frame received for 3\nI0822 19:43:55.519676    2746 log.go:172] (0xc0006c6640) (3) Data frame handling\nI0822 19:43:55.522339    2746 log.go:172] (0xc000977550) Data frame received for 1\nI0822 19:43:55.522366    2746 log.go:172] (0xc000914640) (1) Data frame handling\nI0822 19:43:55.522381    2746 log.go:172] (0xc000914640) (1) Data frame sent\nI0822 19:43:55.522396    2746 log.go:172] (0xc000977550) (0xc000914640) Stream removed, broadcasting: 1\nI0822 19:43:55.522417    2746 log.go:172] (0xc000977550) Go away received\nI0822 19:43:55.522769    2746 log.go:172] (0xc000977550) (0xc000914640) Stream removed, broadcasting: 1\nI0822 19:43:55.522786    2746 log.go:172] (0xc000977550) (0xc0006c6640) Stream removed, broadcasting: 3\nI0822 19:43:55.522793    2746 log.go:172] (0xc000977550) (0xc000577400) Stream removed, broadcasting: 5\n"
Aug 22 19:43:55.533: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:43:55.533: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 22 19:44:05.563: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 22 19:44:15.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-634 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:44:15.840: INFO: stderr: "I0822 19:44:15.746801    2767 log.go:172] (0xc000548dc0) (0xc000a1a140) Create stream\nI0822 19:44:15.746865    2767 log.go:172] (0xc000548dc0) (0xc000a1a140) Stream added, broadcasting: 1\nI0822 19:44:15.749405    2767 log.go:172] (0xc000548dc0) Reply frame received for 1\nI0822 19:44:15.749455    2767 log.go:172] (0xc000548dc0) (0xc0008f2000) Create stream\nI0822 19:44:15.749477    2767 log.go:172] (0xc000548dc0) (0xc0008f2000) Stream added, broadcasting: 3\nI0822 19:44:15.750580    2767 log.go:172] (0xc000548dc0) Reply frame received for 3\nI0822 19:44:15.750612    2767 log.go:172] (0xc000548dc0) (0xc00070ab40) Create stream\nI0822 19:44:15.750621    2767 log.go:172] (0xc000548dc0) (0xc00070ab40) Stream added, broadcasting: 5\nI0822 19:44:15.751511    2767 log.go:172] (0xc000548dc0) Reply frame received for 5\nI0822 19:44:15.830601    2767 log.go:172] (0xc000548dc0) Data frame received for 3\nI0822 19:44:15.830640    2767 log.go:172] (0xc0008f2000) (3) Data frame handling\nI0822 19:44:15.830657    2767 log.go:172] (0xc0008f2000) (3) Data frame sent\nI0822 19:44:15.830679    2767 log.go:172] (0xc000548dc0) Data frame received for 3\nI0822 19:44:15.830690    2767 log.go:172] (0xc0008f2000) (3) Data frame handling\nI0822 19:44:15.830741    2767 log.go:172] (0xc000548dc0) Data frame received for 5\nI0822 19:44:15.830767    2767 log.go:172] (0xc00070ab40) (5) Data frame handling\nI0822 19:44:15.830786    2767 log.go:172] (0xc00070ab40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0822 19:44:15.830805    2767 log.go:172] (0xc000548dc0) Data frame received for 5\nI0822 19:44:15.830832    2767 log.go:172] (0xc00070ab40) (5) Data frame handling\nI0822 19:44:15.832077    2767 log.go:172] (0xc000548dc0) Data frame received for 1\nI0822 19:44:15.832101    2767 log.go:172] (0xc000a1a140) (1) Data frame handling\nI0822 19:44:15.832117    2767 log.go:172] (0xc000a1a140) (1) Data frame sent\nI0822 19:44:15.832127    2767 log.go:172] (0xc000548dc0) (0xc000a1a140) Stream removed, broadcasting: 1\nI0822 19:44:15.832136    2767 log.go:172] (0xc000548dc0) Go away received\nI0822 19:44:15.832609    2767 log.go:172] (0xc000548dc0) (0xc000a1a140) Stream removed, broadcasting: 1\nI0822 19:44:15.832627    2767 log.go:172] (0xc000548dc0) (0xc0008f2000) Stream removed, broadcasting: 3\nI0822 19:44:15.832635    2767 log.go:172] (0xc000548dc0) (0xc00070ab40) Stream removed, broadcasting: 5\n"
Aug 22 19:44:15.841: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:44:15.841: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:44:25.902: INFO: Waiting for StatefulSet statefulset-634/ss2 to complete update
Aug 22 19:44:25.902: INFO: Waiting for Pod statefulset-634/ss2-0 to reach update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Aug 22 19:44:25.902: INFO: Waiting for Pod statefulset-634/ss2-1 to reach update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
Aug 22 19:44:35.909: INFO: Waiting for StatefulSet statefulset-634/ss2 to complete update
Aug 22 19:44:35.909: INFO: Waiting for Pod statefulset-634/ss2-0 to reach update revision ss2-65c7964b94 (pod is still at revision ss2-84f9d6bf57)
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 22 19:44:45.910: INFO: Deleting all statefulset in ns statefulset-634
Aug 22 19:44:45.912: INFO: Scaling statefulset ss2 to 0
Aug 22 19:45:05.943: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:45:05.945: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:45:05.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-634" for this suite.

• [SLOW TEST:162.328 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":175,"skipped":2944,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:45:05.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:45:06.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9387" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":176,"skipped":2946,"failed":0}

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:45:06.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2042
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2042
STEP: Creating statefulset with conflicting port in namespace statefulset-2042
STEP: Waiting until pod test-pod starts running in namespace statefulset-2042
STEP: Waiting until stateful pod ss-0 has been deleted and recreated at least once in namespace statefulset-2042
Aug 22 19:45:10.388: INFO: Observed stateful pod in namespace: statefulset-2042, name: ss-0, uid: d8441681-84fe-4c65-8638-be83603dc764, status phase: Pending. Waiting for statefulset controller to delete.
Aug 22 19:45:10.545: INFO: Observed stateful pod in namespace: statefulset-2042, name: ss-0, uid: d8441681-84fe-4c65-8638-be83603dc764, status phase: Failed. Waiting for statefulset controller to delete.
Aug 22 19:45:10.808: INFO: Observed stateful pod in namespace: statefulset-2042, name: ss-0, uid: d8441681-84fe-4c65-8638-be83603dc764, status phase: Failed. Waiting for statefulset controller to delete.
Aug 22 19:45:11.162: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2042
STEP: Removing pod with conflicting port in namespace statefulset-2042
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2042 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 22 19:45:15.573: INFO: Deleting all statefulset in ns statefulset-2042
Aug 22 19:45:15.575: INFO: Scaling statefulset ss to 0
Aug 22 19:45:25.598: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:45:25.601: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:45:25.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2042" for this suite.

• [SLOW TEST:19.368 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":177,"skipped":2946,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:45:25.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-06865895-dcad-45ab-9cc9-c3073322f5a4
STEP: Creating a pod to test consume secrets
Aug 22 19:45:25.720: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7" in namespace "projected-6216" to be "success or failure"
Aug 22 19:45:25.724: INFO: Pod "pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.699972ms
Aug 22 19:45:27.728: INFO: Pod "pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008071599s
Aug 22 19:45:29.733: INFO: Pod "pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012138976s
STEP: Saw pod success
Aug 22 19:45:29.733: INFO: Pod "pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7" satisfied condition "success or failure"
Aug 22 19:45:29.736: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7 container secret-volume-test: 
STEP: delete the pod
Aug 22 19:45:29.794: INFO: Waiting for pod pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7 to disappear
Aug 22 19:45:29.802: INFO: Pod pod-projected-secrets-394d0b2e-3517-4d07-ab6e-0f095e6141a7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:45:29.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6216" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2947,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:45:29.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9198.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9198.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 19:45:37.957: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.961: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.964: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.967: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.977: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.981: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.984: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.987: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:37.996: INFO: Lookups using dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local]

Aug 22 19:45:42.999: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.003: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.006: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.012: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.020: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.023: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.026: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:43.035: INFO: Lookups using dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local]

Aug 22 19:45:48.000: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.003: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.005: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.007: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.041: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.043: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.046: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.049: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:48.053: INFO: Lookups using dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local]

Aug 22 19:45:53.001: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.004: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.010: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.019: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.022: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.027: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:53.032: INFO: Lookups using dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local]

Aug 22 19:45:58.029: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.032: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.035: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.039: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.105: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.108: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.110: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.114: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:45:58.119: INFO: Lookups using dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local]

Aug 22 19:46:03.000: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.004: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.011: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.020: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.024: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.027: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.030: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local from pod dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb: the server could not find the requested resource (get pods dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb)
Aug 22 19:46:03.036: INFO: Lookups using dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9198.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9198.svc.cluster.local jessie_udp@dns-test-service-2.dns-9198.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9198.svc.cluster.local]

Aug 22 19:46:08.053: INFO: DNS probes using dns-9198/dns-test-d57f516f-5ebd-4d70-8137-477b70e9a5cb succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:46:08.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9198" for this suite.

• [SLOW TEST:38.676 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":179,"skipped":2947,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:46:08.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets exist in the namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:46:26.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9550" for this suite.

• [SLOW TEST:17.658 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":180,"skipped":2965,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:46:26.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-fd1be97e-5f3a-4946-a4f6-9cace414f62d
STEP: Creating secret with name s-test-opt-upd-644f03cd-521f-4cca-95e9-2711a7ed0619
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fd1be97e-5f3a-4946-a4f6-9cace414f62d
STEP: Updating secret s-test-opt-upd-644f03cd-521f-4cca-95e9-2711a7ed0619
STEP: Creating secret with name s-test-opt-create-8ab43670-d542-4caf-a789-756565f67111
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:46:34.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2385" for this suite.

• [SLOW TEST:8.249 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2967,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:46:34.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 22 19:46:34.496: INFO: Waiting up to 5m0s for pod "pod-84659672-6386-4c25-aafa-9816ebb014ac" in namespace "emptydir-6312" to be "success or failure"
Aug 22 19:46:34.499: INFO: Pod "pod-84659672-6386-4c25-aafa-9816ebb014ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424566ms
Aug 22 19:46:36.503: INFO: Pod "pod-84659672-6386-4c25-aafa-9816ebb014ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006892223s
Aug 22 19:46:38.508: INFO: Pod "pod-84659672-6386-4c25-aafa-9816ebb014ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011168622s
STEP: Saw pod success
Aug 22 19:46:38.508: INFO: Pod "pod-84659672-6386-4c25-aafa-9816ebb014ac" satisfied condition "success or failure"
Aug 22 19:46:38.511: INFO: Trying to get logs from node jerma-worker2 pod pod-84659672-6386-4c25-aafa-9816ebb014ac container test-container: 
STEP: delete the pod
Aug 22 19:46:38.591: INFO: Waiting for pod pod-84659672-6386-4c25-aafa-9816ebb014ac to disappear
Aug 22 19:46:38.599: INFO: Pod pod-84659672-6386-4c25-aafa-9816ebb014ac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:46:38.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6312" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2981,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:46:38.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:46:38.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Aug 22 19:46:41.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4657 create -f -'
Aug 22 19:46:46.091: INFO: stderr: ""
Aug 22 19:46:46.091: INFO: stdout: "e2e-test-crd-publish-openapi-1806-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 22 19:46:46.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4657 delete e2e-test-crd-publish-openapi-1806-crds test-cr'
Aug 22 19:46:46.990: INFO: stderr: ""
Aug 22 19:46:46.990: INFO: stdout: "e2e-test-crd-publish-openapi-1806-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 22 19:46:46.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4657 apply -f -'
Aug 22 19:46:47.378: INFO: stderr: ""
Aug 22 19:46:47.379: INFO: stdout: "e2e-test-crd-publish-openapi-1806-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 22 19:46:47.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4657 delete e2e-test-crd-publish-openapi-1806-crds test-cr'
Aug 22 19:46:47.645: INFO: stderr: ""
Aug 22 19:46:47.645: INFO: stdout: "e2e-test-crd-publish-openapi-1806-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 22 19:46:47.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1806-crds'
Aug 22 19:46:47.913: INFO: stderr: ""
Aug 22 19:46:47.913: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1806-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:46:49.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4657" for this suite.

• [SLOW TEST:11.204 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":183,"skipped":2989,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:46:49.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 22 19:46:49.878: INFO: Waiting up to 5m0s for pod "pod-635c6600-47ed-4790-a701-28328b3833bc" in namespace "emptydir-5744" to be "success or failure"
Aug 22 19:46:50.029: INFO: Pod "pod-635c6600-47ed-4790-a701-28328b3833bc": Phase="Pending", Reason="", readiness=false. Elapsed: 150.071001ms
Aug 22 19:46:52.032: INFO: Pod "pod-635c6600-47ed-4790-a701-28328b3833bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153154413s
Aug 22 19:46:54.035: INFO: Pod "pod-635c6600-47ed-4790-a701-28328b3833bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156896637s
Aug 22 19:46:56.039: INFO: Pod "pod-635c6600-47ed-4790-a701-28328b3833bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160672068s
STEP: Saw pod success
Aug 22 19:46:56.039: INFO: Pod "pod-635c6600-47ed-4790-a701-28328b3833bc" satisfied condition "success or failure"
Aug 22 19:46:56.051: INFO: Trying to get logs from node jerma-worker2 pod pod-635c6600-47ed-4790-a701-28328b3833bc container test-container: 
STEP: delete the pod
Aug 22 19:46:56.077: INFO: Waiting for pod pod-635c6600-47ed-4790-a701-28328b3833bc to disappear
Aug 22 19:46:56.115: INFO: Pod pod-635c6600-47ed-4790-a701-28328b3833bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:46:56.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5744" for this suite.

• [SLOW TEST:6.383 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:46:56.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-a6ece689-4c40-4513-a7c4-4a4e94c7273f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a6ece689-4c40-4513-a7c4-4a4e94c7273f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:47:02.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9846" for this suite.

• [SLOW TEST:6.377 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3024,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:47:02.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 22 19:47:02.655: INFO: Waiting up to 5m0s for pod "pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e" in namespace "emptydir-9783" to be "success or failure"
Aug 22 19:47:02.709: INFO: Pod "pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e": Phase="Pending", Reason="", readiness=false. Elapsed: 53.452107ms
Aug 22 19:47:04.913: INFO: Pod "pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257217974s
Aug 22 19:47:06.919: INFO: Pod "pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.263172099s
STEP: Saw pod success
Aug 22 19:47:06.919: INFO: Pod "pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e" satisfied condition "success or failure"
Aug 22 19:47:06.945: INFO: Trying to get logs from node jerma-worker pod pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e container test-container: 
STEP: delete the pod
Aug 22 19:47:06.973: INFO: Waiting for pod pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e to disappear
Aug 22 19:47:07.008: INFO: Pod pod-1cd8a3d1-e5a8-43d7-84af-b6cd8c6f980e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:47:07.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9783" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3026,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:47:07.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:47:08.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:47:10.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:47:12.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722428, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:47:15.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 22 19:47:19.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4971 to-be-attached-pod -i -c=container1'
Aug 22 19:47:20.021: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:47:20.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4971" for this suite.
STEP: Destroying namespace "webhook-4971-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.218 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":187,"skipped":3036,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:47:20.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 19:47:20.486: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 19:47:20.562: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 19:47:20.646: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 19:47:20.666: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.666: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 19:47:20.666: INFO: to-be-attached-pod from webhook-4971 started at 2020-08-22 19:47:15 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.666: INFO: 	Container container1 ready: true, restart count 0
Aug 22 19:47:20.666: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.666: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 19:47:20.666: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.666: INFO: 	Container app ready: true, restart count 0
Aug 22 19:47:20.666: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 19:47:20.671: INFO: pod-configmaps-28fbc91e-461f-4eeb-91c2-bfb0c34d4706 from configmap-9846 started at 2020-08-22 19:46:56 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.671: INFO: 	Container configmap-volume-test ready: false, restart count 0
Aug 22 19:47:20.671: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.671: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 19:47:20.671: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.671: INFO: 	Container app ready: true, restart count 0
Aug 22 19:47:20.671: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 19:47:20.671: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ae2e0380-a944-4cb8-bc36-2b4eda5baa14 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-ae2e0380-a944-4cb8-bc36-2b4eda5baa14 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ae2e0380-a944-4cb8-bc36-2b4eda5baa14
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:52:29.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8537" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:309.268 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":188,"skipped":3040,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:52:29.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 22 19:52:30.351: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 22 19:52:30.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6380'
Aug 22 19:52:30.764: INFO: stderr: ""
Aug 22 19:52:30.764: INFO: stdout: "service/agnhost-slave created\n"
Aug 22 19:52:30.764: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 22 19:52:30.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6380'
Aug 22 19:52:31.297: INFO: stderr: ""
Aug 22 19:52:31.297: INFO: stdout: "service/agnhost-master created\n"
Aug 22 19:52:31.297: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 22 19:52:31.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6380'
Aug 22 19:52:31.647: INFO: stderr: ""
Aug 22 19:52:31.647: INFO: stdout: "service/frontend created\n"
Aug 22 19:52:31.647: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 22 19:52:31.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6380'
Aug 22 19:52:31.997: INFO: stderr: ""
Aug 22 19:52:31.997: INFO: stdout: "deployment.apps/frontend created\n"
Aug 22 19:52:31.998: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 22 19:52:31.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6380'
Aug 22 19:52:32.346: INFO: stderr: ""
Aug 22 19:52:32.346: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 22 19:52:32.346: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 22 19:52:32.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6380'
Aug 22 19:52:33.389: INFO: stderr: ""
Aug 22 19:52:33.389: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 22 19:52:33.389: INFO: Waiting for all frontend pods to be Running.
Aug 22 19:52:43.440: INFO: Waiting for frontend to serve content.
Aug 22 19:52:44.750: INFO: Trying to add a new entry to the guestbook.
Aug 22 19:52:44.766: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 22 19:52:44.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6380'
Aug 22 19:52:45.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 19:52:45.155: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 19:52:45.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6380'
Aug 22 19:52:45.343: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 19:52:45.343: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 19:52:45.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6380'
Aug 22 19:52:45.527: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 19:52:45.527: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 19:52:45.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6380'
Aug 22 19:52:45.666: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 19:52:45.666: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 19:52:45.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6380'
Aug 22 19:52:45.835: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 19:52:45.835: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 19:52:45.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6380'
Aug 22 19:52:45.994: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 19:52:45.994: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:52:45.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6380" for this suite.

• [SLOW TEST:16.527 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":189,"skipped":3044,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:52:46.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 22 19:52:47.145: INFO: Waiting up to 5m0s for pod "pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c" in namespace "emptydir-9085" to be "success or failure"
Aug 22 19:52:47.473: INFO: Pod "pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c": Phase="Pending", Reason="", readiness=false. Elapsed: 327.820401ms
Aug 22 19:52:49.636: INFO: Pod "pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491237693s
Aug 22 19:52:51.666: INFO: Pod "pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520441634s
Aug 22 19:52:53.872: INFO: Pod "pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.726711816s
STEP: Saw pod success
Aug 22 19:52:53.872: INFO: Pod "pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c" satisfied condition "success or failure"
Aug 22 19:52:53.919: INFO: Trying to get logs from node jerma-worker pod pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c container test-container: 
STEP: delete the pod
Aug 22 19:52:54.008: INFO: Waiting for pod pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c to disappear
Aug 22 19:52:54.043: INFO: Pod pod-5c86de2b-42e8-4256-b1b7-89dede6fde7c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:52:54.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9085" for this suite.

• [SLOW TEST:8.038 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3094,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:52:54.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-0bf073b0-7b5f-4c2d-8420-71a984773732
STEP: Creating a pod to test consume secrets
Aug 22 19:52:54.231: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6" in namespace "projected-9947" to be "success or failure"
Aug 22 19:52:54.475: INFO: Pod "pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6": Phase="Pending", Reason="", readiness=false. Elapsed: 243.826658ms
Aug 22 19:52:56.571: INFO: Pod "pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339279718s
Aug 22 19:52:58.574: INFO: Pod "pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342803805s
Aug 22 19:53:00.654: INFO: Pod "pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42302916s
STEP: Saw pod success
Aug 22 19:53:00.654: INFO: Pod "pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6" satisfied condition "success or failure"
Aug 22 19:53:00.657: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6 container projected-secret-volume-test: 
STEP: delete the pod
Aug 22 19:53:00.687: INFO: Waiting for pod pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6 to disappear
Aug 22 19:53:00.691: INFO: Pod pod-projected-secrets-d68f2c0f-ba00-4ffc-82b5-ff99fa0e33b6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:53:00.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9947" for this suite.

• [SLOW TEST:6.631 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3111,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:53:00.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:53:18.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8098" for this suite.

• [SLOW TEST:17.405 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":192,"skipped":3115,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:53:18.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:53:18.278: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fc01f04a-deb2-42f1-9a92-c93bb906efa7" in namespace "security-context-test-5805" to be "success or failure"
Aug 22 19:53:18.316: INFO: Pod "busybox-user-65534-fc01f04a-deb2-42f1-9a92-c93bb906efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.756013ms
Aug 22 19:53:20.322: INFO: Pod "busybox-user-65534-fc01f04a-deb2-42f1-9a92-c93bb906efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043686728s
Aug 22 19:53:22.326: INFO: Pod "busybox-user-65534-fc01f04a-deb2-42f1-9a92-c93bb906efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047800997s
Aug 22 19:53:24.331: INFO: Pod "busybox-user-65534-fc01f04a-deb2-42f1-9a92-c93bb906efa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053316014s
Aug 22 19:53:24.331: INFO: Pod "busybox-user-65534-fc01f04a-deb2-42f1-9a92-c93bb906efa7" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:53:24.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5805" for this suite.

• [SLOW TEST:6.281 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3122,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:53:24.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-3424f822-bc33-4cdd-bdb1-c59156d8d372
STEP: Creating secret with name secret-projected-all-test-volume-2facf20a-3b89-4f12-9145-c0b747dde591
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 22 19:53:24.697: INFO: Waiting up to 5m0s for pod "projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013" in namespace "projected-2964" to be "success or failure"
Aug 22 19:53:24.822: INFO: Pod "projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013": Phase="Pending", Reason="", readiness=false. Elapsed: 125.434156ms
Aug 22 19:53:26.830: INFO: Pod "projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133296957s
Aug 22 19:53:28.909: INFO: Pod "projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211977854s
Aug 22 19:53:30.966: INFO: Pod "projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013": Phase="Running", Reason="", readiness=true. Elapsed: 6.269193342s
Aug 22 19:53:32.987: INFO: Pod "projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.289924305s
STEP: Saw pod success
Aug 22 19:53:32.987: INFO: Pod "projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013" satisfied condition "success or failure"
Aug 22 19:53:32.990: INFO: Trying to get logs from node jerma-worker pod projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013 container projected-all-volume-test: 
STEP: delete the pod
Aug 22 19:53:33.314: INFO: Waiting for pod projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013 to disappear
Aug 22 19:53:33.345: INFO: Pod projected-volume-4817f5e6-c2e2-41d4-b0b0-6a2fe3490013 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:53:33.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2964" for this suite.

• [SLOW TEST:8.966 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3131,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:53:33.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 22 19:53:41.711: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:53:41.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2669" for this suite.

• [SLOW TEST:8.607 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3142,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:53:41.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 22 19:53:42.233: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed 2c026131-8ab5-47c1-8899-0f1f307efd85 2555276 0 2020-08-22 19:53:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 22 19:53:42.233: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed 2c026131-8ab5-47c1-8899-0f1f307efd85 2555277 0 2020-08-22 19:53:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 22 19:53:42.233: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed 2c026131-8ab5-47c1-8899-0f1f307efd85 2555278 0 2020-08-22 19:53:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 22 19:53:52.328: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed 2c026131-8ab5-47c1-8899-0f1f307efd85 2555317 0 2020-08-22 19:53:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 22 19:53:52.328: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed 2c026131-8ab5-47c1-8899-0f1f307efd85 2555318 0 2020-08-22 19:53:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 22 19:53:52.328: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-label-changed 2c026131-8ab5-47c1-8899-0f1f307efd85 2555319 0 2020-08-22 19:53:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:53:52.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7101" for this suite.

• [SLOW TEST:10.553 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":196,"skipped":3159,"failed":0}
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:53:52.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:53:52.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4672
I0822 19:53:52.851302       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4672, replica count: 1
I0822 19:53:53.901765       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:53:54.902043       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:53:55.902289       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 22 19:53:56.072: INFO: Created: latency-svc-fpkb6
Aug 22 19:53:56.100: INFO: Got endpoints: latency-svc-fpkb6 [97.864224ms]
Aug 22 19:53:56.286: INFO: Created: latency-svc-b5kjr
Aug 22 19:53:56.293: INFO: Got endpoints: latency-svc-b5kjr [192.67725ms]
Aug 22 19:53:56.380: INFO: Created: latency-svc-wf6qb
Aug 22 19:53:56.432: INFO: Created: latency-svc-w6wz2
Aug 22 19:53:56.432: INFO: Got endpoints: latency-svc-wf6qb [331.500904ms]
Aug 22 19:53:56.462: INFO: Got endpoints: latency-svc-w6wz2 [361.845364ms]
Aug 22 19:53:56.478: INFO: Created: latency-svc-zgsdw
Aug 22 19:53:56.589: INFO: Got endpoints: latency-svc-zgsdw [488.981188ms]
Aug 22 19:53:56.650: INFO: Created: latency-svc-xp8kf
Aug 22 19:53:56.669: INFO: Got endpoints: latency-svc-xp8kf [568.731654ms]
Aug 22 19:53:57.159: INFO: Created: latency-svc-2sltm
Aug 22 19:53:57.661: INFO: Got endpoints: latency-svc-2sltm [1.561028176s]
Aug 22 19:53:57.665: INFO: Created: latency-svc-qrxcz
Aug 22 19:53:57.822: INFO: Got endpoints: latency-svc-qrxcz [1.721672106s]
Aug 22 19:53:57.903: INFO: Created: latency-svc-kcsvs
Aug 22 19:53:58.236: INFO: Got endpoints: latency-svc-kcsvs [2.135427438s]
Aug 22 19:53:58.460: INFO: Created: latency-svc-5z69b
Aug 22 19:53:58.490: INFO: Got endpoints: latency-svc-5z69b [2.390107361s]
Aug 22 19:53:58.577: INFO: Created: latency-svc-fs4rp
Aug 22 19:53:58.580: INFO: Got endpoints: latency-svc-fs4rp [2.4797457s]
Aug 22 19:53:58.795: INFO: Created: latency-svc-8bkwj
Aug 22 19:53:58.823: INFO: Got endpoints: latency-svc-8bkwj [2.723175534s]
Aug 22 19:53:59.397: INFO: Created: latency-svc-q9xd8
Aug 22 19:53:59.643: INFO: Got endpoints: latency-svc-q9xd8 [3.542557062s]
Aug 22 19:53:59.646: INFO: Created: latency-svc-78f5p
Aug 22 19:53:59.687: INFO: Got endpoints: latency-svc-78f5p [3.586061711s]
Aug 22 19:53:59.722: INFO: Created: latency-svc-6blrv
Aug 22 19:53:59.853: INFO: Got endpoints: latency-svc-6blrv [3.752281411s]
Aug 22 19:54:00.161: INFO: Created: latency-svc-f7rtn
Aug 22 19:54:00.571: INFO: Got endpoints: latency-svc-f7rtn [4.471165344s]
Aug 22 19:54:00.617: INFO: Created: latency-svc-lbkmv
Aug 22 19:54:00.640: INFO: Got endpoints: latency-svc-lbkmv [4.347102239s]
Aug 22 19:54:01.253: INFO: Created: latency-svc-wz67s
Aug 22 19:54:01.311: INFO: Got endpoints: latency-svc-wz67s [4.87847271s]
Aug 22 19:54:01.458: INFO: Created: latency-svc-ffvzp
Aug 22 19:54:01.473: INFO: Got endpoints: latency-svc-ffvzp [5.010905193s]
Aug 22 19:54:01.954: INFO: Created: latency-svc-x7vfw
Aug 22 19:54:01.959: INFO: Got endpoints: latency-svc-x7vfw [5.369193625s]
Aug 22 19:54:02.029: INFO: Created: latency-svc-62w8n
Aug 22 19:54:02.191: INFO: Got endpoints: latency-svc-62w8n [5.522002306s]
Aug 22 19:54:02.199: INFO: Created: latency-svc-kch46
Aug 22 19:54:02.224: INFO: Got endpoints: latency-svc-kch46 [4.562465555s]
Aug 22 19:54:02.367: INFO: Created: latency-svc-nbxkz
Aug 22 19:54:02.371: INFO: Got endpoints: latency-svc-nbxkz [4.548341889s]
Aug 22 19:54:02.422: INFO: Created: latency-svc-5mlm8
Aug 22 19:54:02.793: INFO: Got endpoints: latency-svc-5mlm8 [4.557049239s]
Aug 22 19:54:02.805: INFO: Created: latency-svc-2plh6
Aug 22 19:54:02.859: INFO: Got endpoints: latency-svc-2plh6 [4.368311049s]
Aug 22 19:54:03.075: INFO: Created: latency-svc-tdks8
Aug 22 19:54:03.080: INFO: Got endpoints: latency-svc-tdks8 [4.499275964s]
Aug 22 19:54:03.339: INFO: Created: latency-svc-dqpwr
Aug 22 19:54:03.476: INFO: Got endpoints: latency-svc-dqpwr [4.652816992s]
Aug 22 19:54:03.518: INFO: Created: latency-svc-8twft
Aug 22 19:54:03.548: INFO: Got endpoints: latency-svc-8twft [3.905029403s]
Aug 22 19:54:03.655: INFO: Created: latency-svc-hhr4v
Aug 22 19:54:03.662: INFO: Got endpoints: latency-svc-hhr4v [3.975535996s]
Aug 22 19:54:03.725: INFO: Created: latency-svc-ndq7n
Aug 22 19:54:03.741: INFO: Got endpoints: latency-svc-ndq7n [3.888028627s]
Aug 22 19:54:03.810: INFO: Created: latency-svc-qt4s8
Aug 22 19:54:03.813: INFO: Got endpoints: latency-svc-qt4s8 [3.241233645s]
Aug 22 19:54:04.126: INFO: Created: latency-svc-snlmd
Aug 22 19:54:04.290: INFO: Got endpoints: latency-svc-snlmd [3.64988487s]
Aug 22 19:54:04.512: INFO: Created: latency-svc-dhrwj
Aug 22 19:54:04.520: INFO: Got endpoints: latency-svc-dhrwj [3.208952599s]
Aug 22 19:54:04.951: INFO: Created: latency-svc-72lcj
Aug 22 19:54:04.953: INFO: Got endpoints: latency-svc-72lcj [3.479279522s]
Aug 22 19:54:05.129: INFO: Created: latency-svc-6cvhh
Aug 22 19:54:05.156: INFO: Got endpoints: latency-svc-6cvhh [3.197644751s]
Aug 22 19:54:05.472: INFO: Created: latency-svc-6xsxp
Aug 22 19:54:05.674: INFO: Got endpoints: latency-svc-6xsxp [3.482861261s]
Aug 22 19:54:05.736: INFO: Created: latency-svc-2fqd4
Aug 22 19:54:05.900: INFO: Got endpoints: latency-svc-2fqd4 [3.676457592s]
Aug 22 19:54:05.915: INFO: Created: latency-svc-9ldbf
Aug 22 19:54:05.972: INFO: Got endpoints: latency-svc-9ldbf [3.600881759s]
Aug 22 19:54:06.181: INFO: Created: latency-svc-jj4wx
Aug 22 19:54:06.332: INFO: Got endpoints: latency-svc-jj4wx [3.538921142s]
Aug 22 19:54:06.366: INFO: Created: latency-svc-c6p8m
Aug 22 19:54:06.409: INFO: Got endpoints: latency-svc-c6p8m [3.550537398s]
Aug 22 19:54:06.494: INFO: Created: latency-svc-dqrmq
Aug 22 19:54:06.518: INFO: Got endpoints: latency-svc-dqrmq [3.437973356s]
Aug 22 19:54:06.565: INFO: Created: latency-svc-pg7nl
Aug 22 19:54:06.578: INFO: Got endpoints: latency-svc-pg7nl [3.101282649s]
Aug 22 19:54:06.661: INFO: Created: latency-svc-hkbcr
Aug 22 19:54:06.668: INFO: Got endpoints: latency-svc-hkbcr [3.119826453s]
Aug 22 19:54:06.689: INFO: Created: latency-svc-7wd2s
Aug 22 19:54:06.704: INFO: Got endpoints: latency-svc-7wd2s [3.041828737s]
Aug 22 19:54:06.726: INFO: Created: latency-svc-7msvc
Aug 22 19:54:06.750: INFO: Got endpoints: latency-svc-7msvc [3.009382746s]
Aug 22 19:54:06.835: INFO: Created: latency-svc-9jqcv
Aug 22 19:54:06.878: INFO: Got endpoints: latency-svc-9jqcv [3.065192659s]
Aug 22 19:54:07.002: INFO: Created: latency-svc-96kbf
Aug 22 19:54:07.006: INFO: Got endpoints: latency-svc-96kbf [2.716160778s]
Aug 22 19:54:07.073: INFO: Created: latency-svc-mfp72
Aug 22 19:54:07.146: INFO: Got endpoints: latency-svc-mfp72 [2.62596022s]
Aug 22 19:54:07.170: INFO: Created: latency-svc-tm57s
Aug 22 19:54:07.185: INFO: Got endpoints: latency-svc-tm57s [2.232386902s]
Aug 22 19:54:07.205: INFO: Created: latency-svc-6lm46
Aug 22 19:54:07.221: INFO: Got endpoints: latency-svc-6lm46 [2.064911914s]
Aug 22 19:54:07.241: INFO: Created: latency-svc-ljmrk
Aug 22 19:54:07.313: INFO: Got endpoints: latency-svc-ljmrk [1.638965057s]
Aug 22 19:54:07.327: INFO: Created: latency-svc-mjw62
Aug 22 19:54:07.342: INFO: Got endpoints: latency-svc-mjw62 [1.441342424s]
Aug 22 19:54:07.363: INFO: Created: latency-svc-6t4zx
Aug 22 19:54:07.372: INFO: Got endpoints: latency-svc-6t4zx [1.40034885s]
Aug 22 19:54:07.397: INFO: Created: latency-svc-l5czr
Aug 22 19:54:07.409: INFO: Got endpoints: latency-svc-l5czr [1.07635442s]
Aug 22 19:54:07.463: INFO: Created: latency-svc-tfsgl
Aug 22 19:54:07.475: INFO: Got endpoints: latency-svc-tfsgl [1.065863866s]
Aug 22 19:54:07.495: INFO: Created: latency-svc-t7vnp
Aug 22 19:54:07.505: INFO: Got endpoints: latency-svc-t7vnp [987.371106ms]
Aug 22 19:54:07.524: INFO: Created: latency-svc-58p9k
Aug 22 19:54:07.535: INFO: Got endpoints: latency-svc-58p9k [957.639318ms]
Aug 22 19:54:07.553: INFO: Created: latency-svc-tht9c
Aug 22 19:54:07.619: INFO: Got endpoints: latency-svc-tht9c [950.575579ms]
Aug 22 19:54:07.622: INFO: Created: latency-svc-z4jcw
Aug 22 19:54:07.643: INFO: Got endpoints: latency-svc-z4jcw [938.940047ms]
Aug 22 19:54:07.681: INFO: Created: latency-svc-2dl2k
Aug 22 19:54:07.686: INFO: Got endpoints: latency-svc-2dl2k [935.91748ms]
Aug 22 19:54:07.705: INFO: Created: latency-svc-nsc9k
Aug 22 19:54:07.716: INFO: Got endpoints: latency-svc-nsc9k [838.560046ms]
Aug 22 19:54:07.799: INFO: Created: latency-svc-nbdbp
Aug 22 19:54:07.802: INFO: Got endpoints: latency-svc-nbdbp [796.210959ms]
Aug 22 19:54:07.835: INFO: Created: latency-svc-jwdnh
Aug 22 19:54:07.850: INFO: Got endpoints: latency-svc-jwdnh [704.08306ms]
Aug 22 19:54:07.865: INFO: Created: latency-svc-c9dqq
Aug 22 19:54:07.892: INFO: Got endpoints: latency-svc-c9dqq [706.418216ms]
Aug 22 19:54:07.936: INFO: Created: latency-svc-xd78j
Aug 22 19:54:07.946: INFO: Got endpoints: latency-svc-xd78j [724.432469ms]
Aug 22 19:54:07.968: INFO: Created: latency-svc-8lfpf
Aug 22 19:54:07.983: INFO: Got endpoints: latency-svc-8lfpf [669.064371ms]
Aug 22 19:54:08.004: INFO: Created: latency-svc-rctns
Aug 22 19:54:08.019: INFO: Got endpoints: latency-svc-rctns [677.263688ms]
Aug 22 19:54:08.104: INFO: Created: latency-svc-p8gm2
Aug 22 19:54:08.109: INFO: Got endpoints: latency-svc-p8gm2 [736.924067ms]
Aug 22 19:54:08.161: INFO: Created: latency-svc-kmkrw
Aug 22 19:54:08.185: INFO: Got endpoints: latency-svc-kmkrw [775.978128ms]
Aug 22 19:54:08.251: INFO: Created: latency-svc-k99m4
Aug 22 19:54:08.263: INFO: Got endpoints: latency-svc-k99m4 [787.23651ms]
Aug 22 19:54:08.292: INFO: Created: latency-svc-rfwd4
Aug 22 19:54:08.305: INFO: Got endpoints: latency-svc-rfwd4 [799.534476ms]
Aug 22 19:54:08.326: INFO: Created: latency-svc-gghqk
Aug 22 19:54:08.433: INFO: Got endpoints: latency-svc-gghqk [897.989448ms]
Aug 22 19:54:08.436: INFO: Created: latency-svc-zrhnk
Aug 22 19:54:08.483: INFO: Got endpoints: latency-svc-zrhnk [863.844966ms]
Aug 22 19:54:08.513: INFO: Created: latency-svc-vhvtq
Aug 22 19:54:08.528: INFO: Got endpoints: latency-svc-vhvtq [885.264702ms]
Aug 22 19:54:08.583: INFO: Created: latency-svc-k48nb
Aug 22 19:54:08.611: INFO: Got endpoints: latency-svc-k48nb [924.579845ms]
Aug 22 19:54:08.611: INFO: Created: latency-svc-rt7pf
Aug 22 19:54:08.625: INFO: Got endpoints: latency-svc-rt7pf [908.140748ms]
Aug 22 19:54:08.657: INFO: Created: latency-svc-w4tpn
Aug 22 19:54:08.763: INFO: Got endpoints: latency-svc-w4tpn [960.834497ms]
Aug 22 19:54:08.765: INFO: Created: latency-svc-jmqs2
Aug 22 19:54:08.794: INFO: Got endpoints: latency-svc-jmqs2 [944.29335ms]
Aug 22 19:54:08.828: INFO: Created: latency-svc-gqnzq
Aug 22 19:54:08.847: INFO: Got endpoints: latency-svc-gqnzq [955.601383ms]
Aug 22 19:54:08.900: INFO: Created: latency-svc-tbvxt
Aug 22 19:54:08.913: INFO: Got endpoints: latency-svc-tbvxt [967.485351ms]
Aug 22 19:54:08.939: INFO: Created: latency-svc-k8nq8
Aug 22 19:54:08.956: INFO: Got endpoints: latency-svc-k8nq8 [972.89527ms]
Aug 22 19:54:09.050: INFO: Created: latency-svc-zjcmx
Aug 22 19:54:09.071: INFO: Got endpoints: latency-svc-zjcmx [1.051907345s]
Aug 22 19:54:09.107: INFO: Created: latency-svc-zp8mf
Aug 22 19:54:09.130: INFO: Got endpoints: latency-svc-zp8mf [1.021241367s]
Aug 22 19:54:09.206: INFO: Created: latency-svc-rqr86
Aug 22 19:54:09.211: INFO: Got endpoints: latency-svc-rqr86 [1.026776132s]
Aug 22 19:54:09.251: INFO: Created: latency-svc-tsx7d
Aug 22 19:54:09.263: INFO: Got endpoints: latency-svc-tsx7d [999.932399ms]
Aug 22 19:54:09.287: INFO: Created: latency-svc-j6bh7
Aug 22 19:54:09.299: INFO: Got endpoints: latency-svc-j6bh7 [994.116684ms]
Aug 22 19:54:09.350: INFO: Created: latency-svc-89hzk
Aug 22 19:54:09.365: INFO: Got endpoints: latency-svc-89hzk [931.277154ms]
Aug 22 19:54:09.390: INFO: Created: latency-svc-2hk74
Aug 22 19:54:09.402: INFO: Got endpoints: latency-svc-2hk74 [919.645387ms]
Aug 22 19:54:09.420: INFO: Created: latency-svc-7brk8
Aug 22 19:54:09.433: INFO: Got endpoints: latency-svc-7brk8 [904.196185ms]
Aug 22 19:54:09.500: INFO: Created: latency-svc-9nxfv
Aug 22 19:54:09.502: INFO: Got endpoints: latency-svc-9nxfv [890.963046ms]
Aug 22 19:54:09.539: INFO: Created: latency-svc-gz4rb
Aug 22 19:54:09.560: INFO: Got endpoints: latency-svc-gz4rb [934.985102ms]
Aug 22 19:54:09.615: INFO: Created: latency-svc-2ldm5
Aug 22 19:54:09.675: INFO: Got endpoints: latency-svc-2ldm5 [911.803246ms]
Aug 22 19:54:09.726: INFO: Created: latency-svc-82std
Aug 22 19:54:09.745: INFO: Got endpoints: latency-svc-82std [950.962934ms]
Aug 22 19:54:09.773: INFO: Created: latency-svc-ptqzf
Aug 22 19:54:09.870: INFO: Got endpoints: latency-svc-ptqzf [1.022845213s]
Aug 22 19:54:09.889: INFO: Created: latency-svc-wmpmg
Aug 22 19:54:09.902: INFO: Got endpoints: latency-svc-wmpmg [988.212407ms]
Aug 22 19:54:09.924: INFO: Created: latency-svc-wv5wf
Aug 22 19:54:09.944: INFO: Got endpoints: latency-svc-wv5wf [988.451181ms]
Aug 22 19:54:10.032: INFO: Created: latency-svc-887hm
Aug 22 19:54:10.034: INFO: Got endpoints: latency-svc-887hm [963.138775ms]
Aug 22 19:54:10.093: INFO: Created: latency-svc-szmxm
Aug 22 19:54:10.212: INFO: Got endpoints: latency-svc-szmxm [1.081419508s]
Aug 22 19:54:10.228: INFO: Created: latency-svc-88ldg
Aug 22 19:54:10.250: INFO: Got endpoints: latency-svc-88ldg [1.038682595s]
Aug 22 19:54:10.278: INFO: Created: latency-svc-pv5r4
Aug 22 19:54:10.296: INFO: Got endpoints: latency-svc-pv5r4 [1.032915873s]
Aug 22 19:54:10.539: INFO: Created: latency-svc-vhqfs
Aug 22 19:54:10.539: INFO: Got endpoints: latency-svc-vhqfs [1.240354446s]
Aug 22 19:54:11.069: INFO: Created: latency-svc-52xtz
Aug 22 19:54:11.111: INFO: Got endpoints: latency-svc-52xtz [1.746088908s]
Aug 22 19:54:11.225: INFO: Created: latency-svc-5qlvn
Aug 22 19:54:11.227: INFO: Got endpoints: latency-svc-5qlvn [1.82494014s]
Aug 22 19:54:11.699: INFO: Created: latency-svc-8bvvf
Aug 22 19:54:11.718: INFO: Got endpoints: latency-svc-8bvvf [2.285250548s]
Aug 22 19:54:11.739: INFO: Created: latency-svc-8kf2c
Aug 22 19:54:11.767: INFO: Created: latency-svc-sbwfq
Aug 22 19:54:11.768: INFO: Got endpoints: latency-svc-8kf2c [2.266129353s]
Aug 22 19:54:12.273: INFO: Got endpoints: latency-svc-sbwfq [2.712913587s]
Aug 22 19:54:12.275: INFO: Created: latency-svc-gsvcv
Aug 22 19:54:12.276: INFO: Got endpoints: latency-svc-gsvcv [2.600482253s]
Aug 22 19:54:12.583: INFO: Created: latency-svc-h9dd5
Aug 22 19:54:12.619: INFO: Got endpoints: latency-svc-h9dd5 [2.874359653s]
Aug 22 19:54:12.769: INFO: Created: latency-svc-zw9cw
Aug 22 19:54:12.810: INFO: Got endpoints: latency-svc-zw9cw [2.939597938s]
Aug 22 19:54:12.810: INFO: Created: latency-svc-4lzfd
Aug 22 19:54:12.839: INFO: Got endpoints: latency-svc-4lzfd [2.937116815s]
Aug 22 19:54:12.905: INFO: Created: latency-svc-hss4w
Aug 22 19:54:12.905: INFO: Got endpoints: latency-svc-hss4w [2.960965401s]
Aug 22 19:54:13.237: INFO: Created: latency-svc-h78nt
Aug 22 19:54:13.267: INFO: Got endpoints: latency-svc-h78nt [3.233126424s]
Aug 22 19:54:13.397: INFO: Created: latency-svc-pjlzq
Aug 22 19:54:13.411: INFO: Got endpoints: latency-svc-pjlzq [3.199396265s]
Aug 22 19:54:13.442: INFO: Created: latency-svc-tmwz6
Aug 22 19:54:13.454: INFO: Got endpoints: latency-svc-tmwz6 [3.203372191s]
Aug 22 19:54:13.471: INFO: Created: latency-svc-qvnwq
Aug 22 19:54:13.484: INFO: Got endpoints: latency-svc-qvnwq [3.188325149s]
Aug 22 19:54:13.547: INFO: Created: latency-svc-4d4xl
Aug 22 19:54:13.550: INFO: Got endpoints: latency-svc-4d4xl [3.010230373s]
Aug 22 19:54:13.602: INFO: Created: latency-svc-5kxbk
Aug 22 19:54:13.614: INFO: Got endpoints: latency-svc-5kxbk [2.503063425s]
Aug 22 19:54:13.634: INFO: Created: latency-svc-br4qc
Aug 22 19:54:13.696: INFO: Got endpoints: latency-svc-br4qc [2.469121118s]
Aug 22 19:54:13.699: INFO: Created: latency-svc-jnwlb
Aug 22 19:54:13.707: INFO: Got endpoints: latency-svc-jnwlb [1.988961147s]
Aug 22 19:54:13.737: INFO: Created: latency-svc-gc8v5
Aug 22 19:54:13.761: INFO: Got endpoints: latency-svc-gc8v5 [1.993354438s]
Aug 22 19:54:13.919: INFO: Created: latency-svc-w2bzx
Aug 22 19:54:13.921: INFO: Got endpoints: latency-svc-w2bzx [1.648724449s]
Aug 22 19:54:14.092: INFO: Created: latency-svc-cnpm6
Aug 22 19:54:14.095: INFO: Got endpoints: latency-svc-cnpm6 [1.818914629s]
Aug 22 19:54:14.124: INFO: Created: latency-svc-nn5kt
Aug 22 19:54:14.140: INFO: Got endpoints: latency-svc-nn5kt [1.520497373s]
Aug 22 19:54:14.167: INFO: Created: latency-svc-nktr2
Aug 22 19:54:14.176: INFO: Got endpoints: latency-svc-nktr2 [1.366540866s]
Aug 22 19:54:14.272: INFO: Created: latency-svc-6flf8
Aug 22 19:54:14.274: INFO: Got endpoints: latency-svc-6flf8 [1.435300533s]
Aug 22 19:54:14.336: INFO: Created: latency-svc-hvdd7
Aug 22 19:54:14.529: INFO: Got endpoints: latency-svc-hvdd7 [1.624453177s]
Aug 22 19:54:14.781: INFO: Created: latency-svc-5jqc2
Aug 22 19:54:15.026: INFO: Got endpoints: latency-svc-5jqc2 [1.758899846s]
Aug 22 19:54:15.031: INFO: Created: latency-svc-7dbhh
Aug 22 19:54:15.082: INFO: Got endpoints: latency-svc-7dbhh [1.671067832s]
Aug 22 19:54:15.182: INFO: Created: latency-svc-n47ss
Aug 22 19:54:15.262: INFO: Got endpoints: latency-svc-n47ss [1.808411683s]
Aug 22 19:54:15.319: INFO: Created: latency-svc-n7z2z
Aug 22 19:54:15.381: INFO: Created: latency-svc-tqdwd
Aug 22 19:54:15.382: INFO: Got endpoints: latency-svc-n7z2z [1.897576262s]
Aug 22 19:54:15.493: INFO: Got endpoints: latency-svc-tqdwd [1.943774923s]
Aug 22 19:54:15.543: INFO: Created: latency-svc-b7jlz
Aug 22 19:54:15.575: INFO: Got endpoints: latency-svc-b7jlz [1.961382839s]
Aug 22 19:54:15.649: INFO: Created: latency-svc-l89z6
Aug 22 19:54:15.671: INFO: Got endpoints: latency-svc-l89z6 [1.974382721s]
Aug 22 19:54:15.718: INFO: Created: latency-svc-vhr7t
Aug 22 19:54:15.799: INFO: Got endpoints: latency-svc-vhr7t [2.091646215s]
Aug 22 19:54:15.811: INFO: Created: latency-svc-2c7mg
Aug 22 19:54:15.839: INFO: Got endpoints: latency-svc-2c7mg [2.077876265s]
Aug 22 19:54:15.992: INFO: Created: latency-svc-j7gmn
Aug 22 19:54:16.026: INFO: Got endpoints: latency-svc-j7gmn [2.104215363s]
Aug 22 19:54:16.270: INFO: Created: latency-svc-7kmmx
Aug 22 19:54:16.439: INFO: Got endpoints: latency-svc-7kmmx [2.344226259s]
Aug 22 19:54:16.470: INFO: Created: latency-svc-7q988
Aug 22 19:54:16.494: INFO: Got endpoints: latency-svc-7q988 [2.354017379s]
Aug 22 19:54:16.507: INFO: Created: latency-svc-q5hd5
Aug 22 19:54:16.520: INFO: Got endpoints: latency-svc-q5hd5 [2.343350602s]
Aug 22 19:54:16.539: INFO: Created: latency-svc-gsrb7
Aug 22 19:54:16.607: INFO: Got endpoints: latency-svc-gsrb7 [2.332734258s]
Aug 22 19:54:16.615: INFO: Created: latency-svc-7t7nd
Aug 22 19:54:16.628: INFO: Got endpoints: latency-svc-7t7nd [2.098894262s]
Aug 22 19:54:16.658: INFO: Created: latency-svc-t48f9
Aug 22 19:54:16.684: INFO: Got endpoints: latency-svc-t48f9 [1.658011207s]
Aug 22 19:54:16.757: INFO: Created: latency-svc-4q6vq
Aug 22 19:54:16.767: INFO: Got endpoints: latency-svc-4q6vq [1.684473965s]
Aug 22 19:54:16.797: INFO: Created: latency-svc-kcpsf
Aug 22 19:54:16.827: INFO: Got endpoints: latency-svc-kcpsf [1.565265891s]
Aug 22 19:54:16.907: INFO: Created: latency-svc-kftk9
Aug 22 19:54:16.921: INFO: Got endpoints: latency-svc-kftk9 [1.538940927s]
Aug 22 19:54:16.976: INFO: Created: latency-svc-d8xzt
Aug 22 19:54:17.080: INFO: Got endpoints: latency-svc-d8xzt [1.586570473s]
Aug 22 19:54:17.083: INFO: Created: latency-svc-wgv2n
Aug 22 19:54:17.104: INFO: Got endpoints: latency-svc-wgv2n [1.528185663s]
Aug 22 19:54:17.134: INFO: Created: latency-svc-bzsdw
Aug 22 19:54:17.158: INFO: Got endpoints: latency-svc-bzsdw [1.486722359s]
Aug 22 19:54:17.506: INFO: Created: latency-svc-92gjv
Aug 22 19:54:17.509: INFO: Got endpoints: latency-svc-92gjv [1.710642377s]
Aug 22 19:54:17.552: INFO: Created: latency-svc-w7kvq
Aug 22 19:54:17.566: INFO: Got endpoints: latency-svc-w7kvq [1.727051664s]
Aug 22 19:54:17.583: INFO: Created: latency-svc-94czt
Aug 22 19:54:17.596: INFO: Got endpoints: latency-svc-94czt [1.570652594s]
Aug 22 19:54:17.682: INFO: Created: latency-svc-mf5sr
Aug 22 19:54:17.703: INFO: Got endpoints: latency-svc-mf5sr [1.264431868s]
Aug 22 19:54:17.735: INFO: Created: latency-svc-xsmsd
Aug 22 19:54:17.747: INFO: Got endpoints: latency-svc-xsmsd [1.253160462s]
Aug 22 19:54:17.762: INFO: Created: latency-svc-jlh4z
Aug 22 19:54:17.778: INFO: Got endpoints: latency-svc-jlh4z [1.25817537s]
Aug 22 19:54:17.832: INFO: Created: latency-svc-rv8hk
Aug 22 19:54:17.850: INFO: Got endpoints: latency-svc-rv8hk [1.242518936s]
Aug 22 19:54:17.902: INFO: Created: latency-svc-t4jmw
Aug 22 19:54:17.916: INFO: Got endpoints: latency-svc-t4jmw [1.287501124s]
Aug 22 19:54:17.984: INFO: Created: latency-svc-s47kp
Aug 22 19:54:18.000: INFO: Got endpoints: latency-svc-s47kp [1.315582841s]
Aug 22 19:54:18.020: INFO: Created: latency-svc-46cvl
Aug 22 19:54:18.037: INFO: Got endpoints: latency-svc-46cvl [1.27030105s]
Aug 22 19:54:18.110: INFO: Created: latency-svc-kjldm
Aug 22 19:54:18.114: INFO: Got endpoints: latency-svc-kjldm [1.286844248s]
Aug 22 19:54:18.144: INFO: Created: latency-svc-rq5tf
Aug 22 19:54:18.151: INFO: Got endpoints: latency-svc-rq5tf [1.23021483s]
Aug 22 19:54:18.187: INFO: Created: latency-svc-76l6l
Aug 22 19:54:18.290: INFO: Got endpoints: latency-svc-76l6l [1.209661619s]
Aug 22 19:54:18.293: INFO: Created: latency-svc-vmcr6
Aug 22 19:54:18.301: INFO: Got endpoints: latency-svc-vmcr6 [1.197162666s]
Aug 22 19:54:18.321: INFO: Created: latency-svc-dvbgs
Aug 22 19:54:18.338: INFO: Got endpoints: latency-svc-dvbgs [1.180161445s]
Aug 22 19:54:18.358: INFO: Created: latency-svc-jnvxm
Aug 22 19:54:18.368: INFO: Got endpoints: latency-svc-jnvxm [858.507429ms]
Aug 22 19:54:18.523: INFO: Created: latency-svc-btmgc
Aug 22 19:54:18.526: INFO: Got endpoints: latency-svc-btmgc [959.012999ms]
Aug 22 19:54:18.568: INFO: Created: latency-svc-bjbh5
Aug 22 19:54:18.584: INFO: Got endpoints: latency-svc-bjbh5 [987.959853ms]
Aug 22 19:54:18.615: INFO: Created: latency-svc-52zk5
Aug 22 19:54:18.685: INFO: Got endpoints: latency-svc-52zk5 [981.133984ms]
Aug 22 19:54:18.686: INFO: Created: latency-svc-vxv2n
Aug 22 19:54:18.706: INFO: Got endpoints: latency-svc-vxv2n [958.397691ms]
Aug 22 19:54:18.727: INFO: Created: latency-svc-jqv6s
Aug 22 19:54:18.742: INFO: Got endpoints: latency-svc-jqv6s [963.995814ms]
Aug 22 19:54:18.759: INFO: Created: latency-svc-tplnz
Aug 22 19:54:18.771: INFO: Got endpoints: latency-svc-tplnz [921.628647ms]
Aug 22 19:54:18.834: INFO: Created: latency-svc-2qnts
Aug 22 19:54:18.866: INFO: Got endpoints: latency-svc-2qnts [949.614613ms]
Aug 22 19:54:18.866: INFO: Created: latency-svc-ngt8q
Aug 22 19:54:18.896: INFO: Got endpoints: latency-svc-ngt8q [895.461892ms]
Aug 22 19:54:18.932: INFO: Created: latency-svc-kstql
Aug 22 19:54:19.014: INFO: Got endpoints: latency-svc-kstql [976.870468ms]
Aug 22 19:54:19.052: INFO: Created: latency-svc-z2brg
Aug 22 19:54:19.079: INFO: Got endpoints: latency-svc-z2brg [964.473904ms]
Aug 22 19:54:19.194: INFO: Created: latency-svc-6wxm5
Aug 22 19:54:19.197: INFO: Got endpoints: latency-svc-6wxm5 [1.045885141s]
Aug 22 19:54:19.250: INFO: Created: latency-svc-g2l87
Aug 22 19:54:19.277: INFO: Got endpoints: latency-svc-g2l87 [987.076132ms]
Aug 22 19:54:19.344: INFO: Created: latency-svc-pxvp9
Aug 22 19:54:19.370: INFO: Got endpoints: latency-svc-pxvp9 [1.068615334s]
Aug 22 19:54:19.371: INFO: Created: latency-svc-7dmqk
Aug 22 19:54:19.395: INFO: Got endpoints: latency-svc-7dmqk [1.057482746s]
Aug 22 19:54:19.420: INFO: Created: latency-svc-k7z2r
Aug 22 19:54:19.434: INFO: Got endpoints: latency-svc-k7z2r [1.065786839s]
Aug 22 19:54:19.481: INFO: Created: latency-svc-hc69r
Aug 22 19:54:19.494: INFO: Got endpoints: latency-svc-hc69r [968.64979ms]
Aug 22 19:54:19.514: INFO: Created: latency-svc-kpz6j
Aug 22 19:54:19.530: INFO: Got endpoints: latency-svc-kpz6j [945.572453ms]
Aug 22 19:54:19.561: INFO: Created: latency-svc-2k7tf
Aug 22 19:54:19.573: INFO: Got endpoints: latency-svc-2k7tf [887.944162ms]
Aug 22 19:54:19.637: INFO: Created: latency-svc-q4tls
Aug 22 19:54:19.640: INFO: Got endpoints: latency-svc-q4tls [934.312764ms]
Aug 22 19:54:19.671: INFO: Created: latency-svc-d8pj9
Aug 22 19:54:19.681: INFO: Got endpoints: latency-svc-d8pj9 [938.93125ms]
Aug 22 19:54:19.712: INFO: Created: latency-svc-qvcfg
Aug 22 19:54:19.724: INFO: Got endpoints: latency-svc-qvcfg [952.268551ms]
Aug 22 19:54:19.811: INFO: Created: latency-svc-f4bhw
Aug 22 19:54:19.815: INFO: Got endpoints: latency-svc-f4bhw [948.929309ms]
Aug 22 19:54:19.840: INFO: Created: latency-svc-mr6fs
Aug 22 19:54:19.857: INFO: Got endpoints: latency-svc-mr6fs [961.040478ms]
Aug 22 19:54:19.881: INFO: Created: latency-svc-pz9tv
Aug 22 19:54:19.899: INFO: Got endpoints: latency-svc-pz9tv [884.396012ms]
Aug 22 19:54:19.996: INFO: Created: latency-svc-tfhmq
Aug 22 19:54:19.999: INFO: Got endpoints: latency-svc-tfhmq [920.653977ms]
Aug 22 19:54:20.042: INFO: Created: latency-svc-fmwht
Aug 22 19:54:20.055: INFO: Got endpoints: latency-svc-fmwht [857.862161ms]
Aug 22 19:54:20.080: INFO: Created: latency-svc-t7bhn
Aug 22 19:54:20.091: INFO: Got endpoints: latency-svc-t7bhn [813.990263ms]
Aug 22 19:54:20.140: INFO: Created: latency-svc-2t8qd
Aug 22 19:54:20.157: INFO: Got endpoints: latency-svc-2t8qd [787.721941ms]
Aug 22 19:54:20.174: INFO: Created: latency-svc-p2lgz
Aug 22 19:54:20.187: INFO: Got endpoints: latency-svc-p2lgz [792.089576ms]
Aug 22 19:54:20.205: INFO: Created: latency-svc-qfsnv
Aug 22 19:54:20.224: INFO: Got endpoints: latency-svc-qfsnv [790.133559ms]
Aug 22 19:54:20.290: INFO: Created: latency-svc-wmg8q
Aug 22 19:54:20.293: INFO: Got endpoints: latency-svc-wmg8q [798.767012ms]
Aug 22 19:54:20.347: INFO: Created: latency-svc-wshz4
Aug 22 19:54:20.362: INFO: Got endpoints: latency-svc-wshz4 [832.180398ms]
Aug 22 19:54:20.383: INFO: Created: latency-svc-lggdd
Aug 22 19:54:20.451: INFO: Got endpoints: latency-svc-lggdd [878.701203ms]
Aug 22 19:54:20.454: INFO: Created: latency-svc-h8bxb
Aug 22 19:54:20.466: INFO: Got endpoints: latency-svc-h8bxb [825.768149ms]
Aug 22 19:54:20.487: INFO: Created: latency-svc-s9ffg
Aug 22 19:54:20.502: INFO: Got endpoints: latency-svc-s9ffg [820.628912ms]
Aug 22 19:54:20.523: INFO: Created: latency-svc-mxqrc
Aug 22 19:54:20.539: INFO: Got endpoints: latency-svc-mxqrc [815.116041ms]
Aug 22 19:54:20.601: INFO: Created: latency-svc-prlt7
Aug 22 19:54:20.604: INFO: Got endpoints: latency-svc-prlt7 [788.98721ms]
Aug 22 19:54:20.604: INFO: Latencies: [192.67725ms 331.500904ms 361.845364ms 488.981188ms 568.731654ms 669.064371ms 677.263688ms 704.08306ms 706.418216ms 724.432469ms 736.924067ms 775.978128ms 787.23651ms 787.721941ms 788.98721ms 790.133559ms 792.089576ms 796.210959ms 798.767012ms 799.534476ms 813.990263ms 815.116041ms 820.628912ms 825.768149ms 832.180398ms 838.560046ms 857.862161ms 858.507429ms 863.844966ms 878.701203ms 884.396012ms 885.264702ms 887.944162ms 890.963046ms 895.461892ms 897.989448ms 904.196185ms 908.140748ms 911.803246ms 919.645387ms 920.653977ms 921.628647ms 924.579845ms 931.277154ms 934.312764ms 934.985102ms 935.91748ms 938.93125ms 938.940047ms 944.29335ms 945.572453ms 948.929309ms 949.614613ms 950.575579ms 950.962934ms 952.268551ms 955.601383ms 957.639318ms 958.397691ms 959.012999ms 960.834497ms 961.040478ms 963.138775ms 963.995814ms 964.473904ms 967.485351ms 968.64979ms 972.89527ms 976.870468ms 981.133984ms 987.076132ms 987.371106ms 987.959853ms 988.212407ms 988.451181ms 994.116684ms 999.932399ms 1.021241367s 1.022845213s 1.026776132s 1.032915873s 1.038682595s 1.045885141s 1.051907345s 1.057482746s 1.065786839s 1.065863866s 1.068615334s 1.07635442s 1.081419508s 1.180161445s 1.197162666s 1.209661619s 1.23021483s 1.240354446s 1.242518936s 1.253160462s 1.25817537s 1.264431868s 1.27030105s 1.286844248s 1.287501124s 1.315582841s 1.366540866s 1.40034885s 1.435300533s 1.441342424s 1.486722359s 1.520497373s 1.528185663s 1.538940927s 1.561028176s 1.565265891s 1.570652594s 1.586570473s 1.624453177s 1.638965057s 1.648724449s 1.658011207s 1.671067832s 1.684473965s 1.710642377s 1.721672106s 1.727051664s 1.746088908s 1.758899846s 1.808411683s 1.818914629s 1.82494014s 1.897576262s 1.943774923s 1.961382839s 1.974382721s 1.988961147s 1.993354438s 2.064911914s 2.077876265s 2.091646215s 2.098894262s 2.104215363s 2.135427438s 2.232386902s 2.266129353s 2.285250548s 2.332734258s 2.343350602s 2.344226259s 2.354017379s 2.390107361s 2.469121118s 2.4797457s 2.503063425s 2.600482253s 2.62596022s 2.712913587s 2.716160778s 2.723175534s 2.874359653s 2.937116815s 2.939597938s 2.960965401s 3.009382746s 3.010230373s 3.041828737s 3.065192659s 3.101282649s 3.119826453s 3.188325149s 3.197644751s 3.199396265s 3.203372191s 3.208952599s 3.233126424s 3.241233645s 3.437973356s 3.479279522s 3.482861261s 3.538921142s 3.542557062s 3.550537398s 3.586061711s 3.600881759s 3.64988487s 3.676457592s 3.752281411s 3.888028627s 3.905029403s 3.975535996s 4.347102239s 4.368311049s 4.471165344s 4.499275964s 4.548341889s 4.557049239s 4.562465555s 4.652816992s 4.87847271s 5.010905193s 5.369193625s 5.522002306s]
Aug 22 19:54:20.604: INFO: 50 %ile: 1.286844248s
Aug 22 19:54:20.604: INFO: 90 %ile: 3.586061711s
Aug 22 19:54:20.604: INFO: 99 %ile: 5.369193625s
Aug 22 19:54:20.604: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:54:20.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4672" for this suite.

• [SLOW TEST:28.109 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":197,"skipped":3161,"failed":0}
SSSSSSSSSS
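
The percentile lines above are derived from the 200 sorted endpoint-creation latencies in the preceding list. A self-contained Go sketch of one common percentile definition (nearest rank); the e2e framework's exact indexing may differ slightly:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of sorted samples using the
// nearest-rank rule: the ceil(p/100*n)-th smallest value (1-based).
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	rank := (p*len(sorted) + 99) / 100 // integer ceiling of p*n/100
	if rank < 1 {
		rank = 1
	}
	return sorted[rank-1]
}

func main() {
	// A handful of the latencies reported above, rounded to milliseconds.
	samples := []time.Duration{
		193 * time.Millisecond,
		332 * time.Millisecond,
		1287 * time.Millisecond,
		3586 * time.Millisecond,
		5369 * time.Millisecond,
		5522 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}

Under this rule, with the full 200 samples the 50th percentile is the 100th smallest value.
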
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:54:20.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-406a3dd2-86dc-4bb9-9d03-ad59a842593f
STEP: Creating a pod to test consume secrets
Aug 22 19:54:20.689: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e" in namespace "projected-8237" to be "success or failure"
Aug 22 19:54:20.693: INFO: Pod "pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012211ms
Aug 22 19:54:22.697: INFO: Pod "pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008169455s
Aug 22 19:54:24.702: INFO: Pod "pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e": Phase="Running", Reason="", readiness=true. Elapsed: 4.012852611s
Aug 22 19:54:26.729: INFO: Pod "pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039287845s
STEP: Saw pod success
Aug 22 19:54:26.729: INFO: Pod "pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e" satisfied condition "success or failure"
Aug 22 19:54:26.748: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e container projected-secret-volume-test: 
STEP: delete the pod
Aug 22 19:54:26.892: INFO: Waiting for pod pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e to disappear
Aug 22 19:54:26.898: INFO: Pod pod-projected-secrets-2b88785a-5375-4fdc-8b5d-20891304e86e no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:54:26.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8237" for this suite.

• [SLOW TEST:6.308 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3171,"failed":0}
SSSSSSSSSSSSSS
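
The "with mappings" variant verifies that a projected volume can expose a Secret key under a different file name inside the container. A sketch of such a volume stanza built with the k8s.io/api/core/v1 types; the secret name, key, and path here are hypothetical:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-map",
						},
						// The mapping under test: the key appears in the
						// container under the remapped path, not its own name.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
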
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:54:26.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:54:28.152: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:54:31.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 19:54:33.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733722868, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:54:36.907: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:54:37.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3079" for this suite.
STEP: Destroying namespace "webhook-3079-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.658 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":199,"skipped":3185,"failed":0}
S
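
The patch step in this test toggles which admission operations the webhook intercepts, so a ConfigMap created while CREATE is excluded passes through unmutated, and one created after the patch is mutated again. A hedged client-go sketch of such a patch, assuming recent client-go signatures; the configuration name and rule index are hypothetical:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-include CREATE so configmaps created afterwards are mutated again.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]`)
	_, err = client.AdmissionregistrationV1().MutatingWebhookConfigurations().Patch(
		context.TODO(), "e2e-test-mutating-webhook", types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
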
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:54:37.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-f9c3b823-9908-44c6-88bc-872d6f636dda
STEP: Creating a pod to test consume configMaps
Aug 22 19:54:37.818: INFO: Waiting up to 5m0s for pod "pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15" in namespace "configmap-7662" to be "success or failure"
Aug 22 19:54:37.822: INFO: Pod "pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093659ms
Aug 22 19:54:39.937: INFO: Pod "pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118342798s
Aug 22 19:54:41.942: INFO: Pod "pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1240815s
Aug 22 19:54:43.974: INFO: Pod "pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15604762s
STEP: Saw pod success
Aug 22 19:54:43.974: INFO: Pod "pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15" satisfied condition "success or failure"
Aug 22 19:54:44.071: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15 container configmap-volume-test: 
STEP: delete the pod
Aug 22 19:54:44.164: INFO: Waiting for pod pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15 to disappear
Aug 22 19:54:44.168: INFO: Pod pod-configmaps-50703181-d31a-4b0a-83fa-0089e305aa15 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:54:44.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7662" for this suite.

• [SLOW TEST:6.610 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3186,"failed":0}
SSSSSSS
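
As with the projected-secret test above, the assertion works by reading the test container's log output after the pod succeeds. Conceptually, the container does something like the following; the mount path is hypothetical, standing in for wherever the configMap volume's items mapping projects the key:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical mount path: the volume's items mapping projects the
	// ConfigMap key to this file name inside the container.
	const path = "/etc/configmap-volume/path/to/data-2"
	content, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
	fmt.Printf("content of file %q: %s\n", path, content)
}
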
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:54:44.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:54:44.524: INFO: Create a RollingUpdate DaemonSet
Aug 22 19:54:44.527: INFO: Check that daemon pods launch on every node of the cluster
Aug 22 19:54:44.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:44.569: INFO: Number of nodes with available pods: 0
Aug 22 19:54:44.569: INFO: Node jerma-worker is running more than one daemon pod
Aug 22 19:54:45.771: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:45.842: INFO: Number of nodes with available pods: 0
Aug 22 19:54:45.842: INFO: Node jerma-worker is running more than one daemon pod
Aug 22 19:54:46.622: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:46.805: INFO: Number of nodes with available pods: 0
Aug 22 19:54:46.805: INFO: Node jerma-worker is running more than one daemon pod
Aug 22 19:54:47.579: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:47.812: INFO: Number of nodes with available pods: 0
Aug 22 19:54:47.812: INFO: Node jerma-worker is running more than one daemon pod
Aug 22 19:54:49.113: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:49.668: INFO: Number of nodes with available pods: 0
Aug 22 19:54:49.668: INFO: Node jerma-worker is running more than one daemon pod
Aug 22 19:54:50.901: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:51.058: INFO: Number of nodes with available pods: 1
Aug 22 19:54:51.058: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 19:54:51.598: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:51.613: INFO: Number of nodes with available pods: 1
Aug 22 19:54:51.613: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 19:54:52.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:52.633: INFO: Number of nodes with available pods: 2
Aug 22 19:54:52.633: INFO: Number of running nodes: 2, number of available pods: 2
Aug 22 19:54:52.633: INFO: Update the DaemonSet to trigger a rollout
Aug 22 19:54:52.667: INFO: Updating DaemonSet daemon-set
Aug 22 19:54:55.793: INFO: Roll back the DaemonSet before rollout is complete
Aug 22 19:54:55.833: INFO: Updating DaemonSet daemon-set
Aug 22 19:54:55.833: INFO: Make sure DaemonSet rollback is complete
Aug 22 19:54:55.847: INFO: Wrong image for pod: daemon-set-7dp9w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 22 19:54:55.847: INFO: Pod daemon-set-7dp9w is not available
Aug 22 19:54:55.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:56.951: INFO: Wrong image for pod: daemon-set-7dp9w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 22 19:54:56.951: INFO: Pod daemon-set-7dp9w is not available
Aug 22 19:54:56.957: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:58.303: INFO: Wrong image for pod: daemon-set-7dp9w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 22 19:54:58.303: INFO: Pod daemon-set-7dp9w is not available
Aug 22 19:54:58.308: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 22 19:54:58.901: INFO: Pod daemon-set-dg8wf is not available
Aug 22 19:54:58.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6488, will wait for the garbage collector to delete the pods
Aug 22 19:54:59.091: INFO: Deleting DaemonSet.extensions daemon-set took: 15.63511ms
Aug 22 19:54:59.391: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.191331ms
Aug 22 19:55:02.303: INFO: Number of nodes with available pods: 0
Aug 22 19:55:02.303: INFO: Number of running nodes: 0, number of available pods: 0
Aug 22 19:55:02.338: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6488/daemonsets","resourceVersion":"2556799"},"items":null}

Aug 22 19:55:02.342: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6488/pods","resourceVersion":"2556799"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:55:02.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6488" for this suite.

• [SLOW TEST:18.211 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":201,"skipped":3193,"failed":0}
SSSSSSSSS
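
The sequence above is: update the DaemonSet to an image that can never pull ("foo:non-existent"), then roll back before the rollout finishes, and verify that pods still running the old image are not needlessly restarted. A client-go sketch of the two updates, assuming recent client-go signatures; a production version would wrap each Get/Update pair in a retry-on-conflict loop:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ds := client.AppsV1().DaemonSets("daemonsets-6488")

	// Trigger a rolling update with an image that can never pull.
	d, err := ds.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if _, err = ds.Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the rollout completes; healthy pods still running
	// the old image should not be restarted.
	d, err = ds.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err = ds.Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
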
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:55:02.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:55:14.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4766" for this suite.

• [SLOW TEST:11.647 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":202,"skipped":3202,"failed":0}
SSSSSSSS
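
The quota in this test tracks object counts rather than compute resources: creating the ReplicaSet raises the quota's used count, and deleting it releases the usage. A sketch of a ResourceQuota that caps ReplicaSets, built with the core/v1 types; the quota name and limit are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Object-count quota: at most one ReplicaSet in the namespace.
				"count/replicasets.apps": resource.MustParse("1"),
			},
		},
	}
	b, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(b))
}
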
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:55:14.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:55:14.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1594" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":203,"skipped":3210,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
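
The test above asserts qosClass=Guaranteed because the pod's resource requests equal its limits for both memory and cpu. A simplified, self-contained sketch of that classification rule; the real kubelet logic covers more cases (per-resource partial settings, init containers, and so on):

package main

import "fmt"

// container holds request/limit strings for the two resources the
// conformance test covers; an empty string means "not set".
type container struct {
	reqCPU, limCPU string
	reqMem, limMem string
}

// qosClass is a simplified version of the kubelet's rule: BestEffort if
// nothing is set, Guaranteed if every container's requests equal its
// limits for both cpu and memory, Burstable otherwise.
func qosClass(containers []container) string {
	anySet := false
	allMatch := true
	for _, c := range containers {
		if c.reqCPU != "" || c.limCPU != "" || c.reqMem != "" || c.limMem != "" {
			anySet = true
		}
		if c.reqCPU == "" || c.reqCPU != c.limCPU || c.reqMem == "" || c.reqMem != c.limMem {
			allMatch = false
		}
	}
	switch {
	case !anySet:
		return "BestEffort"
	case allMatch:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	// Matching requests and limits for cpu and memory, as in the test pod.
	pod := []container{{reqCPU: "100m", limCPU: "100m", reqMem: "100Mi", limMem: "100Mi"}}
	fmt.Println(qosClass(pod)) // Guaranteed
}
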
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:55:14.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-6221
STEP: creating replication controller nodeport-test in namespace services-6221
I0822 19:55:14.475072       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6221, replica count: 2
I0822 19:55:17.525571       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:55:20.525847       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:55:23.526086       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 22 19:55:23.526: INFO: Creating new exec pod
Aug 22 19:55:30.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6221 execpod5ct2h -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 22 19:55:30.828: INFO: stderr: "I0822 19:55:30.755999    3152 log.go:172] (0xc000b40000) (0xc000af4000) Create stream\nI0822 19:55:30.756078    3152 log.go:172] (0xc000b40000) (0xc000af4000) Stream added, broadcasting: 1\nI0822 19:55:30.759819    3152 log.go:172] (0xc000b40000) Reply frame received for 1\nI0822 19:55:30.759845    3152 log.go:172] (0xc000b40000) (0xc000af40a0) Create stream\nI0822 19:55:30.759853    3152 log.go:172] (0xc000b40000) (0xc000af40a0) Stream added, broadcasting: 3\nI0822 19:55:30.760550    3152 log.go:172] (0xc000b40000) Reply frame received for 3\nI0822 19:55:30.760579    3152 log.go:172] (0xc000b40000) (0xc000af4140) Create stream\nI0822 19:55:30.760587    3152 log.go:172] (0xc000b40000) (0xc000af4140) Stream added, broadcasting: 5\nI0822 19:55:30.761237    3152 log.go:172] (0xc000b40000) Reply frame received for 5\nI0822 19:55:30.817989    3152 log.go:172] (0xc000b40000) Data frame received for 5\nI0822 19:55:30.818044    3152 log.go:172] (0xc000af4140) (5) Data frame handling\nI0822 19:55:30.818055    3152 log.go:172] (0xc000af4140) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0822 19:55:30.818779    3152 log.go:172] (0xc000b40000) Data frame received for 5\nI0822 19:55:30.818793    3152 log.go:172] (0xc000af4140) (5) Data frame handling\nI0822 19:55:30.818807    3152 log.go:172] (0xc000af4140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0822 19:55:30.819311    3152 log.go:172] (0xc000b40000) Data frame received for 5\nI0822 19:55:30.819347    3152 log.go:172] (0xc000af4140) (5) Data frame handling\nI0822 19:55:30.819388    3152 log.go:172] (0xc000b40000) Data frame received for 3\nI0822 19:55:30.819407    3152 log.go:172] (0xc000af40a0) (3) Data frame handling\nI0822 19:55:30.821104    3152 log.go:172] (0xc000b40000) Data frame received for 1\nI0822 19:55:30.821122    3152 log.go:172] (0xc000af4000) (1) Data frame handling\nI0822 19:55:30.821130    3152 log.go:172] (0xc000af4000) (1) Data frame sent\nI0822 19:55:30.821138    3152 log.go:172] (0xc000b40000) (0xc000af4000) Stream removed, broadcasting: 1\nI0822 19:55:30.821154    3152 log.go:172] (0xc000b40000) Go away received\nI0822 19:55:30.821471    3152 log.go:172] (0xc000b40000) (0xc000af4000) Stream removed, broadcasting: 1\nI0822 19:55:30.821487    3152 log.go:172] (0xc000b40000) (0xc000af40a0) Stream removed, broadcasting: 3\nI0822 19:55:30.821493    3152 log.go:172] (0xc000b40000) (0xc000af4140) Stream removed, broadcasting: 5\n"
Aug 22 19:55:30.828: INFO: stdout: ""
Aug 22 19:55:30.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6221 execpod5ct2h -- /bin/sh -x -c nc -zv -t -w 2 10.105.226.131 80'
Aug 22 19:55:31.028: INFO: stderr: "I0822 19:55:30.951898    3172 log.go:172] (0xc00099f130) (0xc0009fc5a0) Create stream\nI0822 19:55:30.951957    3172 log.go:172] (0xc00099f130) (0xc0009fc5a0) Stream added, broadcasting: 1\nI0822 19:55:30.955830    3172 log.go:172] (0xc00099f130) Reply frame received for 1\nI0822 19:55:30.955890    3172 log.go:172] (0xc00099f130) (0xc0005da6e0) Create stream\nI0822 19:55:30.955911    3172 log.go:172] (0xc00099f130) (0xc0005da6e0) Stream added, broadcasting: 3\nI0822 19:55:30.956960    3172 log.go:172] (0xc00099f130) Reply frame received for 3\nI0822 19:55:30.956994    3172 log.go:172] (0xc00099f130) (0xc0002eb4a0) Create stream\nI0822 19:55:30.957004    3172 log.go:172] (0xc00099f130) (0xc0002eb4a0) Stream added, broadcasting: 5\nI0822 19:55:30.957983    3172 log.go:172] (0xc00099f130) Reply frame received for 5\nI0822 19:55:31.020947    3172 log.go:172] (0xc00099f130) Data frame received for 5\nI0822 19:55:31.020998    3172 log.go:172] (0xc0002eb4a0) (5) Data frame handling\nI0822 19:55:31.021017    3172 log.go:172] (0xc0002eb4a0) (5) Data frame sent\nI0822 19:55:31.021031    3172 log.go:172] (0xc00099f130) Data frame received for 5\nI0822 19:55:31.021043    3172 log.go:172] (0xc0002eb4a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.226.131 80\nConnection to 10.105.226.131 80 port [tcp/http] succeeded!\nI0822 19:55:31.021083    3172 log.go:172] (0xc00099f130) Data frame received for 3\nI0822 19:55:31.021111    3172 log.go:172] (0xc0005da6e0) (3) Data frame handling\nI0822 19:55:31.022309    3172 log.go:172] (0xc00099f130) Data frame received for 1\nI0822 19:55:31.022322    3172 log.go:172] (0xc0009fc5a0) (1) Data frame handling\nI0822 19:55:31.022352    3172 log.go:172] (0xc0009fc5a0) (1) Data frame sent\nI0822 19:55:31.022374    3172 log.go:172] (0xc00099f130) (0xc0009fc5a0) Stream removed, broadcasting: 1\nI0822 19:55:31.022438    3172 log.go:172] (0xc00099f130) Go away received\nI0822 19:55:31.022684    3172 log.go:172] (0xc00099f130) (0xc0009fc5a0) Stream removed, broadcasting: 1\nI0822 19:55:31.022698    3172 log.go:172] (0xc00099f130) (0xc0005da6e0) Stream removed, broadcasting: 3\nI0822 19:55:31.022705    3172 log.go:172] (0xc00099f130) (0xc0002eb4a0) Stream removed, broadcasting: 5\n"
Aug 22 19:55:31.028: INFO: stdout: ""
Aug 22 19:55:31.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6221 execpod5ct2h -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31675'
Aug 22 19:55:31.244: INFO: stderr: "I0822 19:55:31.159773    3192 log.go:172] (0xc000102a50) (0xc0006cfa40) Create stream\nI0822 19:55:31.159828    3192 log.go:172] (0xc000102a50) (0xc0006cfa40) Stream added, broadcasting: 1\nI0822 19:55:31.162272    3192 log.go:172] (0xc000102a50) Reply frame received for 1\nI0822 19:55:31.162320    3192 log.go:172] (0xc000102a50) (0xc0006cfc20) Create stream\nI0822 19:55:31.162334    3192 log.go:172] (0xc000102a50) (0xc0006cfc20) Stream added, broadcasting: 3\nI0822 19:55:31.163213    3192 log.go:172] (0xc000102a50) Reply frame received for 3\nI0822 19:55:31.163259    3192 log.go:172] (0xc000102a50) (0xc00093e000) Create stream\nI0822 19:55:31.163288    3192 log.go:172] (0xc000102a50) (0xc00093e000) Stream added, broadcasting: 5\nI0822 19:55:31.164196    3192 log.go:172] (0xc000102a50) Reply frame received for 5\nI0822 19:55:31.235873    3192 log.go:172] (0xc000102a50) Data frame received for 3\nI0822 19:55:31.235894    3192 log.go:172] (0xc0006cfc20) (3) Data frame handling\nI0822 19:55:31.235925    3192 log.go:172] (0xc000102a50) Data frame received for 5\nI0822 19:55:31.235959    3192 log.go:172] (0xc00093e000) (5) Data frame handling\nI0822 19:55:31.235984    3192 log.go:172] (0xc00093e000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 31675\nConnection to 172.18.0.6 31675 port [tcp/31675] succeeded!\nI0822 19:55:31.236083    3192 log.go:172] (0xc000102a50) Data frame received for 5\nI0822 19:55:31.236099    3192 log.go:172] (0xc00093e000) (5) Data frame handling\nI0822 19:55:31.237749    3192 log.go:172] (0xc000102a50) Data frame received for 1\nI0822 19:55:31.237763    3192 log.go:172] (0xc0006cfa40) (1) Data frame handling\nI0822 19:55:31.237769    3192 log.go:172] (0xc0006cfa40) (1) Data frame sent\nI0822 19:55:31.237777    3192 log.go:172] (0xc000102a50) (0xc0006cfa40) Stream removed, broadcasting: 1\nI0822 19:55:31.237798    3192 log.go:172] (0xc000102a50) Go away received\nI0822 19:55:31.238169    3192 log.go:172] (0xc000102a50) (0xc0006cfa40) Stream removed, broadcasting: 1\nI0822 19:55:31.238209    3192 log.go:172] (0xc000102a50) (0xc0006cfc20) Stream removed, broadcasting: 3\nI0822 19:55:31.238223    3192 log.go:172] (0xc000102a50) (0xc00093e000) Stream removed, broadcasting: 5\n"
Aug 22 19:55:31.245: INFO: stdout: ""
Aug 22 19:55:31.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6221 execpod5ct2h -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31675'
Aug 22 19:55:31.433: INFO: stderr: "I0822 19:55:31.367640    3213 log.go:172] (0xc0005f2790) (0xc0005ea280) Create stream\nI0822 19:55:31.367699    3213 log.go:172] (0xc0005f2790) (0xc0005ea280) Stream added, broadcasting: 1\nI0822 19:55:31.369595    3213 log.go:172] (0xc0005f2790) Reply frame received for 1\nI0822 19:55:31.369621    3213 log.go:172] (0xc0005f2790) (0xc0005ea320) Create stream\nI0822 19:55:31.369628    3213 log.go:172] (0xc0005f2790) (0xc0005ea320) Stream added, broadcasting: 3\nI0822 19:55:31.370347    3213 log.go:172] (0xc0005f2790) Reply frame received for 3\nI0822 19:55:31.370398    3213 log.go:172] (0xc0005f2790) (0xc0005e86e0) Create stream\nI0822 19:55:31.370423    3213 log.go:172] (0xc0005f2790) (0xc0005e86e0) Stream added, broadcasting: 5\nI0822 19:55:31.371178    3213 log.go:172] (0xc0005f2790) Reply frame received for 5\nI0822 19:55:31.426272    3213 log.go:172] (0xc0005f2790) Data frame received for 5\nI0822 19:55:31.426307    3213 log.go:172] (0xc0005e86e0) (5) Data frame handling\nI0822 19:55:31.426319    3213 log.go:172] (0xc0005e86e0) (5) Data frame sent\nI0822 19:55:31.426326    3213 log.go:172] (0xc0005f2790) Data frame received for 5\nI0822 19:55:31.426332    3213 log.go:172] (0xc0005e86e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 31675\nConnection to 172.18.0.3 31675 port [tcp/31675] succeeded!\nI0822 19:55:31.426354    3213 log.go:172] (0xc0005f2790) Data frame received for 3\nI0822 19:55:31.426364    3213 log.go:172] (0xc0005ea320) (3) Data frame handling\nI0822 19:55:31.427460    3213 log.go:172] (0xc0005f2790) Data frame received for 1\nI0822 19:55:31.427514    3213 log.go:172] (0xc0005ea280) (1) Data frame handling\nI0822 19:55:31.427551    3213 log.go:172] (0xc0005ea280) (1) Data frame sent\nI0822 19:55:31.427600    3213 log.go:172] (0xc0005f2790) (0xc0005ea280) Stream removed, broadcasting: 1\nI0822 19:55:31.427659    3213 log.go:172] (0xc0005f2790) Go away received\nI0822 19:55:31.427889    3213 log.go:172] (0xc0005f2790) (0xc0005ea280) Stream removed, broadcasting: 1\nI0822 19:55:31.427907    3213 log.go:172] (0xc0005f2790) (0xc0005ea320) Stream removed, broadcasting: 3\nI0822 19:55:31.427916    3213 log.go:172] (0xc0005f2790) (0xc0005e86e0) Stream removed, broadcasting: 5\n"
Aug 22 19:55:31.434: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:55:31.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6221" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.191 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":204,"skipped":3268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:55:31.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:55:38.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6100" for this suite.

• [SLOW TEST:7.521 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3304,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:55:38.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 22 19:55:44.213: INFO: Successfully updated pod "labelsupdate3bdac4d2-313f-4165-8562-5cd735e17815"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:55:48.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8682" for this suite.

• [SLOW TEST:9.494 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3308,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:55:48.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-1fabc0a1-d0ef-4172-a79d-c46259f9e6a4
STEP: Creating a pod to test consume secrets
Aug 22 19:55:48.993: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d" in namespace "projected-6771" to be "success or failure"
Aug 22 19:55:49.023: INFO: Pod "pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.763568ms
Aug 22 19:55:51.314: INFO: Pod "pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321419098s
Aug 22 19:55:53.318: INFO: Pod "pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325318671s
STEP: Saw pod success
Aug 22 19:55:53.318: INFO: Pod "pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d" satisfied condition "success or failure"
Aug 22 19:55:53.321: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d container projected-secret-volume-test: 
STEP: delete the pod
Aug 22 19:55:53.352: INFO: Waiting for pod pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d to disappear
Aug 22 19:55:53.380: INFO: Pod pod-projected-secrets-2390b272-1eab-451e-a07a-55dc0d042e7d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:55:53.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6771" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3322,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:55:53.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:55:53.772: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-fe2bc271-729a-4495-958f-e7a4782ec9e4" in namespace "security-context-test-7054" to be "success or failure"
Aug 22 19:55:53.836: INFO: Pod "alpine-nnp-false-fe2bc271-729a-4495-958f-e7a4782ec9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 63.914266ms
Aug 22 19:55:55.839: INFO: Pod "alpine-nnp-false-fe2bc271-729a-4495-958f-e7a4782ec9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067427567s
Aug 22 19:55:58.045: INFO: Pod "alpine-nnp-false-fe2bc271-729a-4495-958f-e7a4782ec9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273491361s
Aug 22 19:56:00.049: INFO: Pod "alpine-nnp-false-fe2bc271-729a-4495-958f-e7a4782ec9e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.276847853s
Aug 22 19:56:00.049: INFO: Pod "alpine-nnp-false-fe2bc271-729a-4495-958f-e7a4782ec9e4" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:56:00.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7054" for this suite.

• [SLOW TEST:6.913 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3331,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:56:00.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:56:12.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8380" for this suite.

• [SLOW TEST:11.791 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":209,"skipped":3349,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:56:12.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 22 19:56:12.184: INFO: Created pod &Pod{ObjectMeta:{dns-2479  dns-2479 /api/v1/namespaces/dns-2479/pods/dns-2479 c9efbfec-1fb3-43a9-9415-de5c1d1549bf 2557385 0 2020-08-22 19:56:12 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jkfht,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jkfht,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jkfht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 22 19:56:16.191: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2479 PodName:dns-2479 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 19:56:16.191: INFO: >>> kubeConfig: /root/.kube/config
I0822 19:56:16.225721       6 log.go:172] (0xc002796420) (0xc001bb23c0) Create stream
I0822 19:56:16.225751       6 log.go:172] (0xc002796420) (0xc001bb23c0) Stream added, broadcasting: 1
I0822 19:56:16.227855       6 log.go:172] (0xc002796420) Reply frame received for 1
I0822 19:56:16.227903       6 log.go:172] (0xc002796420) (0xc00200a000) Create stream
I0822 19:56:16.227919       6 log.go:172] (0xc002796420) (0xc00200a000) Stream added, broadcasting: 3
I0822 19:56:16.230433       6 log.go:172] (0xc002796420) Reply frame received for 3
I0822 19:56:16.230473       6 log.go:172] (0xc002796420) (0xc0020034a0) Create stream
I0822 19:56:16.230489       6 log.go:172] (0xc002796420) (0xc0020034a0) Stream added, broadcasting: 5
I0822 19:56:16.231410       6 log.go:172] (0xc002796420) Reply frame received for 5
I0822 19:56:16.290940       6 log.go:172] (0xc002796420) Data frame received for 3
I0822 19:56:16.290979       6 log.go:172] (0xc00200a000) (3) Data frame handling
I0822 19:56:16.291006       6 log.go:172] (0xc00200a000) (3) Data frame sent
I0822 19:56:16.294276       6 log.go:172] (0xc002796420) Data frame received for 3
I0822 19:56:16.294322       6 log.go:172] (0xc00200a000) (3) Data frame handling
I0822 19:56:16.294356       6 log.go:172] (0xc002796420) Data frame received for 5
I0822 19:56:16.294377       6 log.go:172] (0xc0020034a0) (5) Data frame handling
I0822 19:56:16.296069       6 log.go:172] (0xc002796420) Data frame received for 1
I0822 19:56:16.296093       6 log.go:172] (0xc001bb23c0) (1) Data frame handling
I0822 19:56:16.296110       6 log.go:172] (0xc001bb23c0) (1) Data frame sent
I0822 19:56:16.296135       6 log.go:172] (0xc002796420) (0xc001bb23c0) Stream removed, broadcasting: 1
I0822 19:56:16.296149       6 log.go:172] (0xc002796420) Go away received
I0822 19:56:16.296326       6 log.go:172] (0xc002796420) (0xc001bb23c0) Stream removed, broadcasting: 1
I0822 19:56:16.296355       6 log.go:172] (0xc002796420) (0xc00200a000) Stream removed, broadcasting: 3
I0822 19:56:16.296382       6 log.go:172] (0xc002796420) (0xc0020034a0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 22 19:56:16.296: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2479 PodName:dns-2479 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 19:56:16.296: INFO: >>> kubeConfig: /root/.kube/config
I0822 19:56:16.331011       6 log.go:172] (0xc002796a50) (0xc001bb2780) Create stream
I0822 19:56:16.331036       6 log.go:172] (0xc002796a50) (0xc001bb2780) Stream added, broadcasting: 1
I0822 19:56:16.333318       6 log.go:172] (0xc002796a50) Reply frame received for 1
I0822 19:56:16.333383       6 log.go:172] (0xc002796a50) (0xc00200a0a0) Create stream
I0822 19:56:16.333406       6 log.go:172] (0xc002796a50) (0xc00200a0a0) Stream added, broadcasting: 3
I0822 19:56:16.334535       6 log.go:172] (0xc002796a50) Reply frame received for 3
I0822 19:56:16.334588       6 log.go:172] (0xc002796a50) (0xc00200a140) Create stream
I0822 19:56:16.334609       6 log.go:172] (0xc002796a50) (0xc00200a140) Stream added, broadcasting: 5
I0822 19:56:16.335579       6 log.go:172] (0xc002796a50) Reply frame received for 5
I0822 19:56:16.407331       6 log.go:172] (0xc002796a50) Data frame received for 3
I0822 19:56:16.407361       6 log.go:172] (0xc00200a0a0) (3) Data frame handling
I0822 19:56:16.407378       6 log.go:172] (0xc00200a0a0) (3) Data frame sent
I0822 19:56:16.410099       6 log.go:172] (0xc002796a50) Data frame received for 3
I0822 19:56:16.410163       6 log.go:172] (0xc00200a0a0) (3) Data frame handling
I0822 19:56:16.410194       6 log.go:172] (0xc002796a50) Data frame received for 5
I0822 19:56:16.410212       6 log.go:172] (0xc00200a140) (5) Data frame handling
I0822 19:56:16.415133       6 log.go:172] (0xc002796a50) Data frame received for 1
I0822 19:56:16.415171       6 log.go:172] (0xc001bb2780) (1) Data frame handling
I0822 19:56:16.415190       6 log.go:172] (0xc001bb2780) (1) Data frame sent
I0822 19:56:16.415246       6 log.go:172] (0xc002796a50) (0xc001bb2780) Stream removed, broadcasting: 1
I0822 19:56:16.415406       6 log.go:172] (0xc002796a50) (0xc001bb2780) Stream removed, broadcasting: 1
I0822 19:56:16.415442       6 log.go:172] (0xc002796a50) (0xc00200a0a0) Stream removed, broadcasting: 3
I0822 19:56:16.415492       6 log.go:172] (0xc002796a50) (0xc00200a140) Stream removed, broadcasting: 5
Aug 22 19:56:16.415: INFO: Deleting pod dns-2479...
I0822 19:56:16.415596       6 log.go:172] (0xc002796a50) Go away received
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:56:16.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2479" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":210,"skipped":3358,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:56:16.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8593
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8593
I0822 19:56:17.022120       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8593, replica count: 2
I0822 19:56:20.072596       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:56:23.072930       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 19:56:26.073217       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 22 19:56:26.073: INFO: Creating new exec pod
Aug 22 19:56:31.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8593 execpod9plsg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 22 19:56:31.493: INFO: stderr: "I0822 19:56:31.431359    3232 log.go:172] (0xc0009da8f0) (0xc0009a0000) Create stream\nI0822 19:56:31.431417    3232 log.go:172] (0xc0009da8f0) (0xc0009a0000) Stream added, broadcasting: 1\nI0822 19:56:31.434183    3232 log.go:172] (0xc0009da8f0) Reply frame received for 1\nI0822 19:56:31.434261    3232 log.go:172] (0xc0009da8f0) (0xc00069f9a0) Create stream\nI0822 19:56:31.434293    3232 log.go:172] (0xc0009da8f0) (0xc00069f9a0) Stream added, broadcasting: 3\nI0822 19:56:31.435363    3232 log.go:172] (0xc0009da8f0) Reply frame received for 3\nI0822 19:56:31.435417    3232 log.go:172] (0xc0009da8f0) (0xc000328000) Create stream\nI0822 19:56:31.435434    3232 log.go:172] (0xc0009da8f0) (0xc000328000) Stream added, broadcasting: 5\nI0822 19:56:31.436460    3232 log.go:172] (0xc0009da8f0) Reply frame received for 5\nI0822 19:56:31.483769    3232 log.go:172] (0xc0009da8f0) Data frame received for 5\nI0822 19:56:31.483806    3232 log.go:172] (0xc000328000) (5) Data frame handling\nI0822 19:56:31.483849    3232 log.go:172] (0xc000328000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0822 19:56:31.484128    3232 log.go:172] (0xc0009da8f0) Data frame received for 5\nI0822 19:56:31.484152    3232 log.go:172] (0xc000328000) (5) Data frame handling\nI0822 19:56:31.484168    3232 log.go:172] (0xc000328000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0822 19:56:31.484435    3232 log.go:172] (0xc0009da8f0) Data frame received for 3\nI0822 19:56:31.484466    3232 log.go:172] (0xc00069f9a0) (3) Data frame handling\nI0822 19:56:31.484494    3232 log.go:172] (0xc0009da8f0) Data frame received for 5\nI0822 19:56:31.484508    3232 log.go:172] (0xc000328000) (5) Data frame handling\nI0822 19:56:31.486419    3232 log.go:172] (0xc0009da8f0) Data frame received for 1\nI0822 19:56:31.486434    3232 log.go:172] (0xc0009a0000) (1) Data frame handling\nI0822 19:56:31.486442    3232 log.go:172] (0xc0009a0000) (1) Data frame sent\nI0822 19:56:31.486450    3232 log.go:172] (0xc0009da8f0) (0xc0009a0000) Stream removed, broadcasting: 1\nI0822 19:56:31.486463    3232 log.go:172] (0xc0009da8f0) Go away received\nI0822 19:56:31.486879    3232 log.go:172] (0xc0009da8f0) (0xc0009a0000) Stream removed, broadcasting: 1\nI0822 19:56:31.486899    3232 log.go:172] (0xc0009da8f0) (0xc00069f9a0) Stream removed, broadcasting: 3\nI0822 19:56:31.486910    3232 log.go:172] (0xc0009da8f0) (0xc000328000) Stream removed, broadcasting: 5\n"
Aug 22 19:56:31.493: INFO: stdout: ""
Aug 22 19:56:31.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8593 execpod9plsg -- /bin/sh -x -c nc -zv -t -w 2 10.99.229.17 80'
Aug 22 19:56:31.725: INFO: stderr: "I0822 19:56:31.647873    3254 log.go:172] (0xc000a93970) (0xc0009125a0) Create stream\nI0822 19:56:31.647924    3254 log.go:172] (0xc000a93970) (0xc0009125a0) Stream added, broadcasting: 1\nI0822 19:56:31.651674    3254 log.go:172] (0xc000a93970) Reply frame received for 1\nI0822 19:56:31.651720    3254 log.go:172] (0xc000a93970) (0xc000671d60) Create stream\nI0822 19:56:31.651733    3254 log.go:172] (0xc000a93970) (0xc000671d60) Stream added, broadcasting: 3\nI0822 19:56:31.653022    3254 log.go:172] (0xc000a93970) Reply frame received for 3\nI0822 19:56:31.653068    3254 log.go:172] (0xc000a93970) (0xc0005ae960) Create stream\nI0822 19:56:31.653086    3254 log.go:172] (0xc000a93970) (0xc0005ae960) Stream added, broadcasting: 5\nI0822 19:56:31.654085    3254 log.go:172] (0xc000a93970) Reply frame received for 5\nI0822 19:56:31.718403    3254 log.go:172] (0xc000a93970) Data frame received for 3\nI0822 19:56:31.718442    3254 log.go:172] (0xc000671d60) (3) Data frame handling\nI0822 19:56:31.718463    3254 log.go:172] (0xc000a93970) Data frame received for 5\nI0822 19:56:31.718472    3254 log.go:172] (0xc0005ae960) (5) Data frame handling\nI0822 19:56:31.718486    3254 log.go:172] (0xc0005ae960) (5) Data frame sent\nI0822 19:56:31.718496    3254 log.go:172] (0xc000a93970) Data frame received for 5\nI0822 19:56:31.718507    3254 log.go:172] (0xc0005ae960) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.229.17 80\nConnection to 10.99.229.17 80 port [tcp/http] succeeded!\nI0822 19:56:31.719650    3254 log.go:172] (0xc000a93970) Data frame received for 1\nI0822 19:56:31.719674    3254 log.go:172] (0xc0009125a0) (1) Data frame handling\nI0822 19:56:31.719687    3254 log.go:172] (0xc0009125a0) (1) Data frame sent\nI0822 19:56:31.719701    3254 log.go:172] (0xc000a93970) (0xc0009125a0) Stream removed, broadcasting: 1\nI0822 19:56:31.719882    3254 log.go:172] (0xc000a93970) Go away received\nI0822 19:56:31.720078    3254 log.go:172] (0xc000a93970) (0xc0009125a0) Stream removed, broadcasting: 1\nI0822 19:56:31.720098    3254 log.go:172] (0xc000a93970) (0xc000671d60) Stream removed, broadcasting: 3\nI0822 19:56:31.720109    3254 log.go:172] (0xc000a93970) (0xc0005ae960) Stream removed, broadcasting: 5\n"
Aug 22 19:56:31.725: INFO: stdout: ""
Aug 22 19:56:31.725: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:56:31.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8593" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:15.247 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":211,"skipped":3365,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:56:31.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 22 19:56:31.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 22 19:56:43.200: INFO: >>> kubeConfig: /root/.kube/config
Aug 22 19:56:46.093: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:56:55.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3204" for this suite.

• [SLOW TEST:23.797 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":212,"skipped":3399,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:56:55.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-nkpw
STEP: Creating a pod to test atomic-volume-subpath
Aug 22 19:56:56.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-nkpw" in namespace "subpath-9432" to be "success or failure"
Aug 22 19:56:56.261: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.878732ms
Aug 22 19:56:58.423: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167661052s
Aug 22 19:57:00.435: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 4.179078457s
Aug 22 19:57:02.438: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 6.182521685s
Aug 22 19:57:04.442: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 8.186077049s
Aug 22 19:57:06.446: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 10.190463569s
Aug 22 19:57:08.450: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 12.194110092s
Aug 22 19:57:10.454: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 14.198138385s
Aug 22 19:57:12.458: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 16.201793768s
Aug 22 19:57:14.461: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 18.205729323s
Aug 22 19:57:16.465: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 20.209730387s
Aug 22 19:57:18.470: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 22.214113997s
Aug 22 19:57:20.474: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Running", Reason="", readiness=true. Elapsed: 24.218366308s
Aug 22 19:57:22.480: INFO: Pod "pod-subpath-test-secret-nkpw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.223782052s
STEP: Saw pod success
Aug 22 19:57:22.480: INFO: Pod "pod-subpath-test-secret-nkpw" satisfied condition "success or failure"
Aug 22 19:57:22.482: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-nkpw container test-container-subpath-secret-nkpw: 
STEP: delete the pod
Aug 22 19:57:22.728: INFO: Waiting for pod pod-subpath-test-secret-nkpw to disappear
Aug 22 19:57:22.731: INFO: Pod pod-subpath-test-secret-nkpw no longer exists
STEP: Deleting pod pod-subpath-test-secret-nkpw
Aug 22 19:57:22.731: INFO: Deleting pod "pod-subpath-test-secret-nkpw" in namespace "subpath-9432"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:57:22.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9432" for this suite.

• [SLOW TEST:27.188 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":213,"skipped":3404,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:57:22.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2220
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-2220
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2220
Aug 22 19:57:22.911: INFO: Found 0 stateful pods, waiting for 1
Aug 22 19:57:32.915: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 22 19:57:32.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2220 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:57:39.360: INFO: stderr: "I0822 19:57:39.213409    3275 log.go:172] (0xc0000f7810) (0xc0006edf40) Create stream\nI0822 19:57:39.213438    3275 log.go:172] (0xc0000f7810) (0xc0006edf40) Stream added, broadcasting: 1\nI0822 19:57:39.216247    3275 log.go:172] (0xc0000f7810) Reply frame received for 1\nI0822 19:57:39.216294    3275 log.go:172] (0xc0000f7810) (0xc00065e6e0) Create stream\nI0822 19:57:39.216306    3275 log.go:172] (0xc0000f7810) (0xc00065e6e0) Stream added, broadcasting: 3\nI0822 19:57:39.217536    3275 log.go:172] (0xc0000f7810) Reply frame received for 3\nI0822 19:57:39.217600    3275 log.go:172] (0xc0000f7810) (0xc0005ab360) Create stream\nI0822 19:57:39.217635    3275 log.go:172] (0xc0000f7810) (0xc0005ab360) Stream added, broadcasting: 5\nI0822 19:57:39.218632    3275 log.go:172] (0xc0000f7810) Reply frame received for 5\nI0822 19:57:39.315153    3275 log.go:172] (0xc0000f7810) Data frame received for 5\nI0822 19:57:39.315175    3275 log.go:172] (0xc0005ab360) (5) Data frame handling\nI0822 19:57:39.315189    3275 log.go:172] (0xc0005ab360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:57:39.347123    3275 log.go:172] (0xc0000f7810) Data frame received for 3\nI0822 19:57:39.347158    3275 log.go:172] (0xc00065e6e0) (3) Data frame handling\nI0822 19:57:39.347183    3275 log.go:172] (0xc00065e6e0) (3) Data frame sent\nI0822 19:57:39.347401    3275 log.go:172] (0xc0000f7810) Data frame received for 5\nI0822 19:57:39.347438    3275 log.go:172] (0xc0005ab360) (5) Data frame handling\nI0822 19:57:39.347504    3275 log.go:172] (0xc0000f7810) Data frame received for 3\nI0822 19:57:39.347556    3275 log.go:172] (0xc00065e6e0) (3) Data frame handling\nI0822 19:57:39.349270    3275 log.go:172] (0xc0000f7810) Data frame received for 1\nI0822 19:57:39.349308    3275 log.go:172] (0xc0006edf40) (1) Data frame handling\nI0822 19:57:39.349345    3275 log.go:172] (0xc0006edf40) (1) Data frame sent\nI0822 19:57:39.349377    3275 log.go:172] (0xc0000f7810) (0xc0006edf40) Stream removed, broadcasting: 1\nI0822 19:57:39.349581    3275 log.go:172] (0xc0000f7810) Go away received\nI0822 19:57:39.349816    3275 log.go:172] (0xc0000f7810) (0xc0006edf40) Stream removed, broadcasting: 1\nI0822 19:57:39.349838    3275 log.go:172] (0xc0000f7810) (0xc00065e6e0) Stream removed, broadcasting: 3\nI0822 19:57:39.349861    3275 log.go:172] (0xc0000f7810) (0xc0005ab360) Stream removed, broadcasting: 5\n"
Aug 22 19:57:39.361: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:57:39.361: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 22 19:57:39.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 22 19:57:49.447: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:57:49.447: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:57:49.471: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 22 19:57:49.471: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:57:49.472: INFO: 
Aug 22 19:57:49.472: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 22 19:57:50.476: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98619784s
Aug 22 19:57:51.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982144098s
Aug 22 19:57:53.085: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.534679468s
Aug 22 19:57:54.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.372509757s
Aug 22 19:57:55.292: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.30336802s
Aug 22 19:57:56.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.165272948s
Aug 22 19:57:57.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.160518845s
Aug 22 19:57:58.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.056968491s
Aug 22 19:57:59.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 22.220867ms
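
The set reaches exactly 3 replicas even while ss-0 is unready; an OrderedReady StatefulSet would block on the unready pod, so the burst behaviour verified here matches what podManagementPolicy: Parallel provides. A sketch of such a spec, inferring the webserver container name, the "test" headless service, and the httpd htdocs paths from the log, with the rest illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: Parallel      # scale without waiting for Ready pods
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      containers:
      - name: webserver
        image: httpd                 # illustrative; probe file under htdocs
        readinessProbe:
          httpGet: {path: /index.html, port: 80}
EOF
kubectl scale statefulset ss --replicas=3
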
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2220
Aug 22 19:58:00.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2220 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:58:00.747: INFO: stderr: "I0822 19:58:00.661106    3309 log.go:172] (0xc000105600) (0xc0002bdea0) Create stream\nI0822 19:58:00.661169    3309 log.go:172] (0xc000105600) (0xc0002bdea0) Stream added, broadcasting: 1\nI0822 19:58:00.663416    3309 log.go:172] (0xc000105600) Reply frame received for 1\nI0822 19:58:00.663471    3309 log.go:172] (0xc000105600) (0xc0008a9e00) Create stream\nI0822 19:58:00.663483    3309 log.go:172] (0xc000105600) (0xc0008a9e00) Stream added, broadcasting: 3\nI0822 19:58:00.664451    3309 log.go:172] (0xc000105600) Reply frame received for 3\nI0822 19:58:00.664486    3309 log.go:172] (0xc000105600) (0xc0002bdf40) Create stream\nI0822 19:58:00.664496    3309 log.go:172] (0xc000105600) (0xc0002bdf40) Stream added, broadcasting: 5\nI0822 19:58:00.665316    3309 log.go:172] (0xc000105600) Reply frame received for 5\nI0822 19:58:00.736166    3309 log.go:172] (0xc000105600) Data frame received for 3\nI0822 19:58:00.736198    3309 log.go:172] (0xc0008a9e00) (3) Data frame handling\nI0822 19:58:00.736207    3309 log.go:172] (0xc0008a9e00) (3) Data frame sent\nI0822 19:58:00.736214    3309 log.go:172] (0xc000105600) Data frame received for 3\nI0822 19:58:00.736219    3309 log.go:172] (0xc0008a9e00) (3) Data frame handling\nI0822 19:58:00.736245    3309 log.go:172] (0xc000105600) Data frame received for 5\nI0822 19:58:00.736253    3309 log.go:172] (0xc0002bdf40) (5) Data frame handling\nI0822 19:58:00.736261    3309 log.go:172] (0xc0002bdf40) (5) Data frame sent\nI0822 19:58:00.736269    3309 log.go:172] (0xc000105600) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0822 19:58:00.736278    3309 log.go:172] (0xc0002bdf40) (5) Data frame handling\nI0822 19:58:00.737454    3309 log.go:172] (0xc000105600) Data frame received for 1\nI0822 19:58:00.737472    3309 log.go:172] (0xc0002bdea0) (1) Data frame handling\nI0822 19:58:00.737480    3309 log.go:172] (0xc0002bdea0) (1) Data frame sent\nI0822 19:58:00.737499    3309 log.go:172] (0xc000105600) (0xc0002bdea0) Stream removed, broadcasting: 1\nI0822 19:58:00.737517    3309 log.go:172] (0xc000105600) Go away received\nI0822 19:58:00.737866    3309 log.go:172] (0xc000105600) (0xc0002bdea0) Stream removed, broadcasting: 1\nI0822 19:58:00.737888    3309 log.go:172] (0xc000105600) (0xc0008a9e00) Stream removed, broadcasting: 3\nI0822 19:58:00.737895    3309 log.go:172] (0xc000105600) (0xc0002bdf40) Stream removed, broadcasting: 5\n"
Aug 22 19:58:00.747: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:58:00.747: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:58:00.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2220 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:58:00.980: INFO: stderr: "I0822 19:58:00.884426    3331 log.go:172] (0xc00059adc0) (0xc00069dae0) Create stream\nI0822 19:58:00.884487    3331 log.go:172] (0xc00059adc0) (0xc00069dae0) Stream added, broadcasting: 1\nI0822 19:58:00.886650    3331 log.go:172] (0xc00059adc0) Reply frame received for 1\nI0822 19:58:00.886675    3331 log.go:172] (0xc00059adc0) (0xc0009b0000) Create stream\nI0822 19:58:00.886682    3331 log.go:172] (0xc00059adc0) (0xc0009b0000) Stream added, broadcasting: 3\nI0822 19:58:00.887343    3331 log.go:172] (0xc00059adc0) Reply frame received for 3\nI0822 19:58:00.887365    3331 log.go:172] (0xc00059adc0) (0xc00069dcc0) Create stream\nI0822 19:58:00.887371    3331 log.go:172] (0xc00059adc0) (0xc00069dcc0) Stream added, broadcasting: 5\nI0822 19:58:00.887974    3331 log.go:172] (0xc00059adc0) Reply frame received for 5\nI0822 19:58:00.971347    3331 log.go:172] (0xc00059adc0) Data frame received for 5\nI0822 19:58:00.971379    3331 log.go:172] (0xc00069dcc0) (5) Data frame handling\nI0822 19:58:00.971396    3331 log.go:172] (0xc00069dcc0) (5) Data frame sent\nI0822 19:58:00.971409    3331 log.go:172] (0xc00059adc0) Data frame received for 5\nI0822 19:58:00.971418    3331 log.go:172] (0xc00069dcc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0822 19:58:00.971461    3331 log.go:172] (0xc00059adc0) Data frame received for 3\nI0822 19:58:00.971493    3331 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0822 19:58:00.971512    3331 log.go:172] (0xc0009b0000) (3) Data frame sent\nI0822 19:58:00.971531    3331 log.go:172] (0xc00059adc0) Data frame received for 3\nI0822 19:58:00.971546    3331 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0822 19:58:00.972522    3331 log.go:172] (0xc00059adc0) Data frame received for 1\nI0822 19:58:00.972542    3331 log.go:172] (0xc00069dae0) (1) Data frame handling\nI0822 19:58:00.972552    3331 log.go:172] (0xc00069dae0) (1) Data frame sent\nI0822 19:58:00.972566    3331 log.go:172] (0xc00059adc0) (0xc00069dae0) Stream removed, broadcasting: 1\nI0822 19:58:00.972627    3331 log.go:172] (0xc00059adc0) Go away received\nI0822 19:58:00.972956    3331 log.go:172] (0xc00059adc0) (0xc00069dae0) Stream removed, broadcasting: 1\nI0822 19:58:00.972972    3331 log.go:172] (0xc00059adc0) (0xc0009b0000) Stream removed, broadcasting: 3\nI0822 19:58:00.972980    3331 log.go:172] (0xc00059adc0) (0xc00069dcc0) Stream removed, broadcasting: 5\n"
Aug 22 19:58:00.980: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:58:00.980: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 22 19:58:00.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2220 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 22 19:58:01.227: INFO: stderr: "I0822 19:58:01.154258    3352 log.go:172] (0xc000a686e0) (0xc000ab63c0) Create stream\nI0822 19:58:01.154338    3352 log.go:172] (0xc000a686e0) (0xc000ab63c0) Stream added, broadcasting: 1\nI0822 19:58:01.156935    3352 log.go:172] (0xc000a686e0) Reply frame received for 1\nI0822 19:58:01.157045    3352 log.go:172] (0xc000a686e0) (0xc000ab6460) Create stream\nI0822 19:58:01.157098    3352 log.go:172] (0xc000a686e0) (0xc000ab6460) Stream added, broadcasting: 3\nI0822 19:58:01.159756    3352 log.go:172] (0xc000a686e0) Reply frame received for 3\nI0822 19:58:01.159799    3352 log.go:172] (0xc000a686e0) (0xc0001fda40) Create stream\nI0822 19:58:01.159810    3352 log.go:172] (0xc000a686e0) (0xc0001fda40) Stream added, broadcasting: 5\nI0822 19:58:01.161053    3352 log.go:172] (0xc000a686e0) Reply frame received for 5\nI0822 19:58:01.218505    3352 log.go:172] (0xc000a686e0) Data frame received for 5\nI0822 19:58:01.218536    3352 log.go:172] (0xc0001fda40) (5) Data frame handling\nI0822 19:58:01.218551    3352 log.go:172] (0xc0001fda40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0822 19:58:01.218567    3352 log.go:172] (0xc000a686e0) Data frame received for 3\nI0822 19:58:01.218574    3352 log.go:172] (0xc000ab6460) (3) Data frame handling\nI0822 19:58:01.218581    3352 log.go:172] (0xc000ab6460) (3) Data frame sent\nI0822 19:58:01.218589    3352 log.go:172] (0xc000a686e0) Data frame received for 3\nI0822 19:58:01.218594    3352 log.go:172] (0xc000ab6460) (3) Data frame handling\nI0822 19:58:01.218633    3352 log.go:172] (0xc000a686e0) Data frame received for 5\nI0822 19:58:01.218660    3352 log.go:172] (0xc0001fda40) (5) Data frame handling\nI0822 19:58:01.220110    3352 log.go:172] (0xc000a686e0) Data frame received for 1\nI0822 19:58:01.220137    3352 log.go:172] (0xc000ab63c0) (1) Data frame handling\nI0822 19:58:01.220173    3352 log.go:172] (0xc000ab63c0) (1) Data frame sent\nI0822 19:58:01.220192    3352 log.go:172] (0xc000a686e0) (0xc000ab63c0) Stream removed, broadcasting: 1\nI0822 19:58:01.220218    3352 log.go:172] (0xc000a686e0) Go away received\nI0822 19:58:01.220649    3352 log.go:172] (0xc000a686e0) (0xc000ab63c0) Stream removed, broadcasting: 1\nI0822 19:58:01.220678    3352 log.go:172] (0xc000a686e0) (0xc000ab6460) Stream removed, broadcasting: 3\nI0822 19:58:01.220693    3352 log.go:172] (0xc000a686e0) (0xc0001fda40) Stream removed, broadcasting: 5\n"
Aug 22 19:58:01.228: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 22 19:58:01.228: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

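[editor's note] The three execs above put index.html back into the Apache docroot so each replica's readiness check starts passing again; the trailing `|| true` tolerates pods where the file was never moved out. A minimal sketch of the same operation run by hand, reusing the namespace and pod names from this run:

  # Restore the probe target in every replica; '|| true' keeps the loop
  # going even if a pod has no /tmp/index.html to move back.
  for pod in ss-0 ss-1 ss-2; do
    kubectl --kubeconfig=/root/.kube/config exec -n statefulset-2220 "$pod" -- \
      /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
  done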
Aug 22 19:58:01.244: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Aug 22 19:58:11.248: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:58:11.248: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 19:58:11.248: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 22 19:58:11.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2220 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:58:11.481: INFO: stderr: "I0822 19:58:11.400180    3371 log.go:172] (0xc000b1ab00) (0xc0006c5f40) Create stream\nI0822 19:58:11.400250    3371 log.go:172] (0xc000b1ab00) (0xc0006c5f40) Stream added, broadcasting: 1\nI0822 19:58:11.403123    3371 log.go:172] (0xc000b1ab00) Reply frame received for 1\nI0822 19:58:11.403178    3371 log.go:172] (0xc000b1ab00) (0xc00060a820) Create stream\nI0822 19:58:11.403195    3371 log.go:172] (0xc000b1ab00) (0xc00060a820) Stream added, broadcasting: 3\nI0822 19:58:11.404195    3371 log.go:172] (0xc000b1ab00) Reply frame received for 3\nI0822 19:58:11.404239    3371 log.go:172] (0xc000b1ab00) (0xc00046d5e0) Create stream\nI0822 19:58:11.404249    3371 log.go:172] (0xc000b1ab00) (0xc00046d5e0) Stream added, broadcasting: 5\nI0822 19:58:11.405460    3371 log.go:172] (0xc000b1ab00) Reply frame received for 5\nI0822 19:58:11.471166    3371 log.go:172] (0xc000b1ab00) Data frame received for 3\nI0822 19:58:11.471197    3371 log.go:172] (0xc00060a820) (3) Data frame handling\nI0822 19:58:11.471209    3371 log.go:172] (0xc00060a820) (3) Data frame sent\nI0822 19:58:11.471233    3371 log.go:172] (0xc000b1ab00) Data frame received for 5\nI0822 19:58:11.471241    3371 log.go:172] (0xc00046d5e0) (5) Data frame handling\nI0822 19:58:11.471249    3371 log.go:172] (0xc00046d5e0) (5) Data frame sent\nI0822 19:58:11.471257    3371 log.go:172] (0xc000b1ab00) Data frame received for 5\nI0822 19:58:11.471264    3371 log.go:172] (0xc00046d5e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:58:11.471357    3371 log.go:172] (0xc000b1ab00) Data frame received for 3\nI0822 19:58:11.471374    3371 log.go:172] (0xc00060a820) (3) Data frame handling\nI0822 19:58:11.473272    3371 log.go:172] (0xc000b1ab00) Data frame received for 1\nI0822 19:58:11.473301    3371 log.go:172] (0xc0006c5f40) (1) Data frame handling\nI0822 19:58:11.473314    3371 log.go:172] (0xc0006c5f40) (1) Data frame sent\nI0822 19:58:11.473330    3371 log.go:172] (0xc000b1ab00) (0xc0006c5f40) Stream removed, broadcasting: 1\nI0822 19:58:11.473350    3371 log.go:172] (0xc000b1ab00) Go away received\nI0822 19:58:11.473654    3371 log.go:172] (0xc000b1ab00) (0xc0006c5f40) Stream removed, broadcasting: 1\nI0822 19:58:11.473674    3371 log.go:172] (0xc000b1ab00) (0xc00060a820) Stream removed, broadcasting: 3\nI0822 19:58:11.473683    3371 log.go:172] (0xc000b1ab00) (0xc00046d5e0) Stream removed, broadcasting: 5\n"
Aug 22 19:58:11.481: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:58:11.481: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 22 19:58:11.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2220 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:58:11.731: INFO: stderr: "I0822 19:58:11.603693    3392 log.go:172] (0xc00095a000) (0xc00087a000) Create stream\nI0822 19:58:11.603763    3392 log.go:172] (0xc00095a000) (0xc00087a000) Stream added, broadcasting: 1\nI0822 19:58:11.609561    3392 log.go:172] (0xc00095a000) Reply frame received for 1\nI0822 19:58:11.609603    3392 log.go:172] (0xc00095a000) (0xc0009ba000) Create stream\nI0822 19:58:11.609623    3392 log.go:172] (0xc00095a000) (0xc0009ba000) Stream added, broadcasting: 3\nI0822 19:58:11.610775    3392 log.go:172] (0xc00095a000) Reply frame received for 3\nI0822 19:58:11.610823    3392 log.go:172] (0xc00095a000) (0xc000708aa0) Create stream\nI0822 19:58:11.610841    3392 log.go:172] (0xc00095a000) (0xc000708aa0) Stream added, broadcasting: 5\nI0822 19:58:11.611828    3392 log.go:172] (0xc00095a000) Reply frame received for 5\nI0822 19:58:11.679341    3392 log.go:172] (0xc00095a000) Data frame received for 5\nI0822 19:58:11.679367    3392 log.go:172] (0xc000708aa0) (5) Data frame handling\nI0822 19:58:11.679387    3392 log.go:172] (0xc000708aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:58:11.718943    3392 log.go:172] (0xc00095a000) Data frame received for 3\nI0822 19:58:11.718977    3392 log.go:172] (0xc0009ba000) (3) Data frame handling\nI0822 19:58:11.718995    3392 log.go:172] (0xc0009ba000) (3) Data frame sent\nI0822 19:58:11.719003    3392 log.go:172] (0xc00095a000) Data frame received for 3\nI0822 19:58:11.719009    3392 log.go:172] (0xc0009ba000) (3) Data frame handling\nI0822 19:58:11.719285    3392 log.go:172] (0xc00095a000) Data frame received for 5\nI0822 19:58:11.719305    3392 log.go:172] (0xc000708aa0) (5) Data frame handling\nI0822 19:58:11.721099    3392 log.go:172] (0xc00095a000) Data frame received for 1\nI0822 19:58:11.721111    3392 log.go:172] (0xc00087a000) (1) Data frame handling\nI0822 19:58:11.721117    3392 log.go:172] (0xc00087a000) (1) Data frame sent\nI0822 19:58:11.721124    3392 log.go:172] (0xc00095a000) (0xc00087a000) Stream removed, broadcasting: 1\nI0822 19:58:11.721257    3392 log.go:172] (0xc00095a000) Go away received\nI0822 19:58:11.721355    3392 log.go:172] (0xc00095a000) (0xc00087a000) Stream removed, broadcasting: 1\nI0822 19:58:11.721368    3392 log.go:172] (0xc00095a000) (0xc0009ba000) Stream removed, broadcasting: 3\nI0822 19:58:11.721373    3392 log.go:172] (0xc00095a000) (0xc000708aa0) Stream removed, broadcasting: 5\n"
Aug 22 19:58:11.731: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:58:11.731: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 22 19:58:11.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2220 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 22 19:58:11.971: INFO: stderr: "I0822 19:58:11.851465    3411 log.go:172] (0xc0000f6d10) (0xc00067a000) Create stream\nI0822 19:58:11.851510    3411 log.go:172] (0xc0000f6d10) (0xc00067a000) Stream added, broadcasting: 1\nI0822 19:58:11.854602    3411 log.go:172] (0xc0000f6d10) Reply frame received for 1\nI0822 19:58:11.854646    3411 log.go:172] (0xc0000f6d10) (0xc0006fb9a0) Create stream\nI0822 19:58:11.854658    3411 log.go:172] (0xc0000f6d10) (0xc0006fb9a0) Stream added, broadcasting: 3\nI0822 19:58:11.855775    3411 log.go:172] (0xc0000f6d10) Reply frame received for 3\nI0822 19:58:11.855815    3411 log.go:172] (0xc0000f6d10) (0xc00067a140) Create stream\nI0822 19:58:11.855828    3411 log.go:172] (0xc0000f6d10) (0xc00067a140) Stream added, broadcasting: 5\nI0822 19:58:11.857066    3411 log.go:172] (0xc0000f6d10) Reply frame received for 5\nI0822 19:58:11.930207    3411 log.go:172] (0xc0000f6d10) Data frame received for 5\nI0822 19:58:11.930227    3411 log.go:172] (0xc00067a140) (5) Data frame handling\nI0822 19:58:11.930237    3411 log.go:172] (0xc00067a140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0822 19:58:11.964619    3411 log.go:172] (0xc0000f6d10) Data frame received for 3\nI0822 19:58:11.964665    3411 log.go:172] (0xc0006fb9a0) (3) Data frame handling\nI0822 19:58:11.964693    3411 log.go:172] (0xc0006fb9a0) (3) Data frame sent\nI0822 19:58:11.964719    3411 log.go:172] (0xc0000f6d10) Data frame received for 3\nI0822 19:58:11.964844    3411 log.go:172] (0xc0006fb9a0) (3) Data frame handling\nI0822 19:58:11.964947    3411 log.go:172] (0xc0000f6d10) Data frame received for 5\nI0822 19:58:11.964987    3411 log.go:172] (0xc00067a140) (5) Data frame handling\nI0822 19:58:11.966510    3411 log.go:172] (0xc0000f6d10) Data frame received for 1\nI0822 19:58:11.966530    3411 log.go:172] (0xc00067a000) (1) Data frame handling\nI0822 19:58:11.966537    3411 log.go:172] (0xc00067a000) (1) Data frame sent\nI0822 19:58:11.966549    3411 log.go:172] (0xc0000f6d10) (0xc00067a000) Stream removed, broadcasting: 1\nI0822 19:58:11.966591    3411 log.go:172] (0xc0000f6d10) Go away received\nI0822 19:58:11.966803    3411 log.go:172] (0xc0000f6d10) (0xc00067a000) Stream removed, broadcasting: 1\nI0822 19:58:11.966815    3411 log.go:172] (0xc0000f6d10) (0xc0006fb9a0) Stream removed, broadcasting: 3\nI0822 19:58:11.966821    3411 log.go:172] (0xc0000f6d10) (0xc00067a140) Stream removed, broadcasting: 5\n"
Aug 22 19:58:11.972: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 22 19:58:11.972: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

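[editor's note] Moving index.html out of the docroot is what drives the Ready=false transitions below: the pods gate readiness on serving that file, so removing it fails the probe without killing the container. One hedged way to watch the condition flip by hand (names taken from this run):

  # Print the Ready condition of one replica; expect "False" once the
  # docroot file has been moved away.
  kubectl --kubeconfig=/root/.kube/config -n statefulset-2220 get pod ss-0 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'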
Aug 22 19:58:11.972: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:58:11.977: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Aug 22 19:58:21.985: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:58:21.985: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:58:21.985: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 22 19:58:22.035: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:22.035: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:22.035: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:22.035: INFO: ss-2  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:22.035: INFO: 
Aug 22 19:58:22.035: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:23.059: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:23.059: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:23.059: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:23.059: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:23.059: INFO: 
Aug 22 19:58:23.059: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:24.065: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:24.065: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:24.065: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:24.065: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:24.065: INFO: 
Aug 22 19:58:24.065: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:25.089: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:25.089: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:25.089: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:25.089: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:25.089: INFO: 
Aug 22 19:58:25.089: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:26.094: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:26.094: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:26.094: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:26.094: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:26.094: INFO: 
Aug 22 19:58:26.094: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:27.098: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:27.099: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:27.099: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:27.099: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:27.099: INFO: 
Aug 22 19:58:27.099: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:28.103: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:28.103: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:28.103: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:28.104: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:28.104: INFO: 
Aug 22 19:58:28.104: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:29.109: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:29.109: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:29.109: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:29.109: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:29.109: INFO: 
Aug 22 19:58:29.109: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:30.114: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:30.114: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:30.114: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:30.114: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:30.114: INFO: 
Aug 22 19:58:30.114: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 22 19:58:31.119: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 22 19:58:31.119: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:23 +0000 UTC  }]
Aug 22 19:58:31.119: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:31.119: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:58:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-22 19:57:49 +0000 UTC  }]
Aug 22 19:58:31.119: INFO: 
Aug 22 19:58:31.119: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2220
Aug 22 19:58:32.123: INFO: Scaling statefulset ss to 0
Aug 22 19:58:32.131: INFO: Waiting for statefulset status.replicas updated to 0
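[editor's note] The framework drives this scale-down through the API, but the equivalent command-line step is a single kubectl scale; because this is the burst-scaling test (podManagementPolicy: Parallel), all pods terminate at once rather than in reverse ordinal order:

  # Equivalent manual scale-down of the StatefulSet exercised above.
  kubectl --kubeconfig=/root/.kube/config -n statefulset-2220 \
    scale statefulset ss --replicas=0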
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 22 19:58:32.134: INFO: Deleting all statefulset in ns statefulset-2220
Aug 22 19:58:32.136: INFO: Scaling statefulset ss to 0
Aug 22 19:58:32.143: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 19:58:32.145: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:58:32.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2220" for this suite.

• [SLOW TEST:69.436 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":214,"skipped":3405,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:58:32.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 22 19:58:32.251: INFO: PodSpec: initContainers in spec.initContainers
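[editor's note] The pod under test declares its init containers in spec.initContainers; with restartPolicy: Never each must run to successful completion exactly once before the app container starts. A minimal sketch of such a pod (image and commands are assumptions, not taken from this log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo          # hypothetical name
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-1
      image: busybox         # assumed image
      command: ['sh', '-c', 'true']
    containers:
    - name: main
      image: busybox
      command: ['sh', '-c', 'echo app started']
  EOF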
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:58:39.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9128" for this suite.

• [SLOW TEST:7.457 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":215,"skipped":3406,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:58:39.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359
Aug 22 19:58:39.957: INFO: Pod name my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359: Found 0 pods out of 1
Aug 22 19:58:44.970: INFO: Pod name my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359: Found 1 pods out of 1
Aug 22 19:58:44.970: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359" are running
Aug 22 19:58:44.972: INFO: Pod "my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359-fcvk5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 19:58:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 19:58:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 19:58:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 19:58:39 +0000 UTC Reason: Message:}])
Aug 22 19:58:44.972: INFO: Trying to dial the pod
Aug 22 19:58:49.982: INFO: Controller my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359: Got expected result from replica 1 [my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359-fcvk5]: "my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359-fcvk5", 1 of 1 required successes so far
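[editor's note] The "dial" above goes through the API server's pod proxy: the serve-hostname image answers HTTP requests with its own pod name, which is why the expected result equals the pod name. A hedged manual equivalent using the names from this run:

  # Fetch the replica's response via the API server proxy; it should
  # print the pod's own name.
  kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/namespaces/replication-controller-8202/pods/my-hostname-basic-c3107f84-6f14-4656-b8d8-ab91dd7f2359-fcvk5/proxy/"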
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:58:49.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8202" for this suite.

• [SLOW TEST:10.353 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":216,"skipped":3413,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:58:49.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 19:58:50.928: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 19:58:52.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723131, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723131, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723131, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723130, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 19:58:55.999: INFO: Waiting for the number of service e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 19:58:56.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4512-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
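[editor's note] The registration STEP above wires a mutating webhook to the test CRD's API group so mutation keeps working while the storage version moves from v1 to v2. A rough sketch of what such a registration looks like; the service path, rules, and CA bundle are assumptions for illustration, and only the webhook name, service name, and namespace appear in this log:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: e2e-test-webhook-demo            # hypothetical name
  webhooks:
  - name: e2e-test-webhook-4512-crds.webhook.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        namespace: webhook-957
        name: e2e-test-webhook
        path: /mutating-custom-resource    # assumed path
      # caBundle: <base64 PEM> omitted in this sketch
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1", "v2"]
      operations: ["CREATE", "UPDATE"]
      resources: ["e2e-test-webhook-4512-crds"]
  EOF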
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:58:57.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-957" for this suite.
STEP: Destroying namespace "webhook-957-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.641 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":217,"skipped":3431,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:58:57.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 22 19:59:05.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:05.842: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:07.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:08.168: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:09.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:09.847: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:11.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:11.846: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:13.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:13.845: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:15.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:15.846: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:17.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:17.851: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:19.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:19.846: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 19:59:21.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 19:59:21.846: INFO: Pod pod-with-poststart-exec-hook no longer exists
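[editor's note] The pod the test just deleted carries its hook in lifecycle.postStart; the kubelet runs the exec handler right after the container starts, and the container is not considered started until the hook completes. A sketch of that pod shape (the image and hook command are assumptions; the real test reports back to a helper pod):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-exec-hook
  spec:
    containers:
    - name: main
      image: busybox                       # assumed image
      command: ['sh', '-c', 'sleep 3600']
      lifecycle:
        postStart:
          exec:
            command: ['sh', '-c', 'echo poststart ran']   # assumed handler
  EOF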
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:59:21.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8530" for this suite.

• [SLOW TEST:24.224 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:59:21.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Aug 22 19:59:21.964: INFO: Waiting up to 5m0s for pod "var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed" in namespace "var-expansion-6807" to be "success or failure"
Aug 22 19:59:21.992: INFO: Pod "var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed": Phase="Pending", Reason="", readiness=false. Elapsed: 28.058544ms
Aug 22 19:59:24.059: INFO: Pod "var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095280594s
Aug 22 19:59:26.063: INFO: Pod "var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099026538s
STEP: Saw pod success
Aug 22 19:59:26.063: INFO: Pod "var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed" satisfied condition "success or failure"
Aug 22 19:59:26.066: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed container dapi-container: 
STEP: delete the pod
Aug 22 19:59:26.115: INFO: Waiting for pod var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed to disappear
Aug 22 19:59:26.335: INFO: Pod var-expansion-9f31281f-de42-45b0-ab60-d0f80f5d56ed no longer exists
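[editor's note] Env composition means one environment variable is assembled from others with $(VAR) references, which the kubelet expands against variables defined earlier in the same container. A minimal sketch (values are assumptions; the container name dapi-container is taken from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo               # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox                       # assumed image
      command: ['sh', '-c', 'echo "$FOO_BAR"']
      env:
      - name: FOO
        value: foo
      - name: BAR
        value: bar
      - name: FOO_BAR
        value: $(FOO)--$(BAR)              # composed from the two above
  EOF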
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:59:26.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6807" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3492,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:59:26.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 22 19:59:26.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
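[editor's note] Dropping a version from the served list is what removes its schema from the published OpenAPI document while leaving the other version's definition intact. A hedged manual equivalent (the CRD name here is hypothetical):

  # Flip the second version (index 1) to served: false.
  kubectl patch crd e2e-test-crd-publish-openapi-crds.example.com \
    --type=json \
    -p='[{"op": "replace", "path": "/spec/versions/1/served", "value": false}]'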
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 19:59:40.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5155" for this suite.

• [SLOW TEST:14.097 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":220,"skipped":3526,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 19:59:40.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
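[editor's note] "Status is promptly calculated" means the quota controller populates status.hard and status.used shortly after the object is created, which is the condition this test polls for. A minimal sketch (quota name and limits are assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: test-quota                       # hypothetical name
    namespace: resourcequota-9175
  spec:
    hard:
      pods: "5"
      services: "3"
  EOF
  # status.hard/status.used should be filled in promptly:
  kubectl -n resourcequota-9175 describe quota test-quota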
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:00:01.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9175" for this suite.

• [SLOW TEST:21.109 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":221,"skipped":3543,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:00:01.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 20:00:02.099: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 20:00:04.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:00:06.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723202, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 20:00:09.144: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that the server cannot talk to, with fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: verify that creating a configmap is unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:00:09.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7889" for this suite.
STEP: Destroying namespace "webhook-7889-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.800 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":222,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:00:09.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-9025/secret-test-b648c1a7-b598-45e8-af37-bbb78bf1a8f7
STEP: Creating a pod to test consume secrets
Aug 22 20:00:09.484: INFO: Waiting up to 5m0s for pod "pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817" in namespace "secrets-9025" to be "success or failure"
Aug 22 20:00:09.499: INFO: Pod "pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817": Phase="Pending", Reason="", readiness=false. Elapsed: 15.036775ms
Aug 22 20:00:11.502: INFO: Pod "pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018717823s
Aug 22 20:00:13.506: INFO: Pod "pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817": Phase="Running", Reason="", readiness=true. Elapsed: 4.022663591s
Aug 22 20:00:15.563: INFO: Pod "pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079067391s
STEP: Saw pod success
Aug 22 20:00:15.563: INFO: Pod "pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817" satisfied condition "success or failure"
Aug 22 20:00:15.565: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817 container env-test: 
STEP: delete the pod
Aug 22 20:00:15.660: INFO: Waiting for pod pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817 to disappear
Aug 22 20:00:15.739: INFO: Pod pod-configmaps-a61d5670-93f6-4cf7-8690-0645d80dc817 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:00:15.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9025" for this suite.

• [SLOW TEST:6.391 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3583,"failed":0}
SS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:00:15.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:00:15.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3858" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":224,"skipped":3585,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:00:16.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Aug 22 20:00:16.374: INFO: Waiting up to 5m0s for pod "var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb" in namespace "var-expansion-6836" to be "success or failure"
Aug 22 20:00:16.376: INFO: Pod "var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.775528ms
Aug 22 20:00:18.449: INFO: Pod "var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075248234s
Aug 22 20:00:20.472: INFO: Pod "var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098870105s
Aug 22 20:00:22.476: INFO: Pod "var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb": Phase="Running", Reason="", readiness=true. Elapsed: 6.101889156s
Aug 22 20:00:24.479: INFO: Pod "var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105219418s
STEP: Saw pod success
Aug 22 20:00:24.479: INFO: Pod "var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb" satisfied condition "success or failure"
Aug 22 20:00:24.481: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb container dapi-container: 
STEP: delete the pod
Aug 22 20:00:24.497: INFO: Waiting for pod var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb to disappear
Aug 22 20:00:24.502: INFO: Pod var-expansion-cbc6d467-d7ac-4a89-8b2c-b890acdf4cfb no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:00:24.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6836" for this suite.

• [SLOW TEST:8.421 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3594,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:00:24.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 22 20:00:24.546: INFO: PodSpec: initContainers in spec.initContainers
Aug 22 20:01:18.803: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e4d6a670-cfc8-42a2-84ab-127608845c88", GenerateName:"", Namespace:"init-container-734", SelfLink:"/api/v1/namespaces/init-container-734/pods/pod-init-e4d6a670-cfc8-42a2-84ab-127608845c88", UID:"285587e2-6e5a-4242-8320-9679948ee467", ResourceVersion:"2558996", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733723224, loc:(*time.Location)(0x7931640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"546136234"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tbbs2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005e8b780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tbbs2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tbbs2", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tbbs2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003b72f68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c790e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003b72ff0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003b73020)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003b73028), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003b7302c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723224, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723224, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723224, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723224, loc:(*time.Location)(0x7931640)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.175", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.175"}}, StartTime:(*v1.Time)(0xc004138f00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc004138f40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002bf5e30)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e7bdea02234023294b7e19efbffff15432d1a6601164bc0d8e8522bb72b6b9ed", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004138f60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004138f20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003b730bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:01:18.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-734" for this suite.

• [SLOW TEST:54.518 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":226,"skipped":3646,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:01:19.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-a35510a7-f731-43b9-baa0-66a7b050100b
STEP: Creating a pod to test consume configMaps
Aug 22 20:01:21.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe" in namespace "configmap-4588" to be "success or failure"
Aug 22 20:01:21.710: INFO: Pod "pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 13.213608ms
Aug 22 20:01:23.785: INFO: Pod "pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088988406s
Aug 22 20:01:25.809: INFO: Pod "pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112359425s
Aug 22 20:01:27.812: INFO: Pod "pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11609444s
Aug 22 20:01:29.817: INFO: Pod "pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120562168s
STEP: Saw pod success
Aug 22 20:01:29.817: INFO: Pod "pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe" satisfied condition "success or failure"
Aug 22 20:01:29.820: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe container configmap-volume-test: 
STEP: delete the pod
Aug 22 20:01:30.009: INFO: Waiting for pod pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe to disappear
Aug 22 20:01:30.277: INFO: Pod pod-configmaps-58747055-3b21-45a9-8f35-76525793fcfe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:01:30.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4588" for this suite.

• [SLOW TEST:11.259 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3670,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:01:30.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 22 20:01:31.111: INFO: Waiting up to 5m0s for pod "pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6" in namespace "emptydir-5438" to be "success or failure"
Aug 22 20:01:31.312: INFO: Pod "pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6": Phase="Pending", Reason="", readiness=false. Elapsed: 200.966912ms
Aug 22 20:01:33.743: INFO: Pod "pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.63202299s
Aug 22 20:01:35.947: INFO: Pod "pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.835621033s
Aug 22 20:01:38.006: INFO: Pod "pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.895047081s
STEP: Saw pod success
Aug 22 20:01:38.006: INFO: Pod "pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6" satisfied condition "success or failure"
Aug 22 20:01:38.009: INFO: Trying to get logs from node jerma-worker2 pod pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6 container test-container: 
STEP: delete the pod
Aug 22 20:01:38.194: INFO: Waiting for pod pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6 to disappear
Aug 22 20:01:38.223: INFO: Pod pod-ca6a1f36-3de4-4a9f-b1b8-f3612c6258e6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:01:38.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5438" for this suite.

• [SLOW TEST:7.944 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3682,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:01:38.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8838
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-8838
Aug 22 20:01:38.618: INFO: Found 0 stateful pods, waiting for 1
Aug 22 20:01:48.623: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 22 20:01:48.881: INFO: Deleting all statefulset in ns statefulset-8838
Aug 22 20:01:48.884: INFO: Scaling statefulset ss to 0
Aug 22 20:02:19.150: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 20:02:19.153: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:02:19.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8838" for this suite.

• [SLOW TEST:41.947 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":229,"skipped":3699,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:02:20.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if all data is printed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 20:02:20.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 22 20:02:22.185: INFO: stderr: ""
Aug 22 20:02:22.185: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:02:22.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-338" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":230,"skipped":3713,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:02:22.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-9742
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9742 to expose endpoints map[]
Aug 22 20:02:23.942: INFO: Get endpoints failed (27.953698ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 22 20:02:24.946: INFO: successfully validated that service multi-endpoint-test in namespace services-9742 exposes endpoints map[] (1.031434829s elapsed)
STEP: Creating pod pod1 in namespace services-9742
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9742 to expose endpoints map[pod1:[100]]
Aug 22 20:02:30.459: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.506338589s elapsed, will retry)
Aug 22 20:02:31.643: INFO: successfully validated that service multi-endpoint-test in namespace services-9742 exposes endpoints map[pod1:[100]] (6.69019829s elapsed)
STEP: Creating pod pod2 in namespace services-9742
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9742 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 22 20:02:36.519: INFO: Unexpected endpoints: found map[25135644-a912-4b82-acca-9ace9dffb2c0:[100]], expected map[pod1:[100] pod2:[101]] (4.825744945s elapsed, will retry)
Aug 22 20:02:42.873: INFO: Unexpected endpoints: found map[25135644-a912-4b82-acca-9ace9dffb2c0:[100]], expected map[pod1:[100] pod2:[101]] (11.179417472s elapsed, will retry)
Aug 22 20:02:43.910: INFO: successfully validated that service multi-endpoint-test in namespace services-9742 exposes endpoints map[pod1:[100] pod2:[101]] (12.21626676s elapsed)
STEP: Deleting pod pod1 in namespace services-9742
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9742 to expose endpoints map[pod2:[101]]
Aug 22 20:02:45.356: INFO: successfully validated that service multi-endpoint-test in namespace services-9742 exposes endpoints map[pod2:[101]] (1.442648517s elapsed)
STEP: Deleting pod pod2 in namespace services-9742
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9742 to expose endpoints map[]
Aug 22 20:02:46.433: INFO: successfully validated that service multi-endpoint-test in namespace services-9742 exposes endpoints map[] (1.072548432s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:02:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9742" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:25.365 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":231,"skipped":3751,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:02:48.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-bl82
STEP: Creating a pod to test atomic-volume-subpath
Aug 22 20:02:48.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bl82" in namespace "subpath-954" to be "success or failure"
Aug 22 20:02:48.807: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Pending", Reason="", readiness=false. Elapsed: 9.451192ms
Aug 22 20:02:51.006: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208162262s
Aug 22 20:02:53.009: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211842982s
Aug 22 20:02:55.014: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 6.216456557s
Aug 22 20:02:57.018: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 8.22056935s
Aug 22 20:02:59.022: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 10.224227709s
Aug 22 20:03:01.199: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 12.401433249s
Aug 22 20:03:03.402: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 14.604884578s
Aug 22 20:03:05.405: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 16.607684145s
Aug 22 20:03:07.412: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 18.614282966s
Aug 22 20:03:09.415: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 20.617833772s
Aug 22 20:03:11.696: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 22.898593345s
Aug 22 20:03:13.700: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Running", Reason="", readiness=true. Elapsed: 24.902193267s
Aug 22 20:03:15.704: INFO: Pod "pod-subpath-test-projected-bl82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.906572764s
STEP: Saw pod success
Aug 22 20:03:15.704: INFO: Pod "pod-subpath-test-projected-bl82" satisfied condition "success or failure"
Aug 22 20:03:15.707: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-bl82 container test-container-subpath-projected-bl82: 
STEP: delete the pod
Aug 22 20:03:15.750: INFO: Waiting for pod pod-subpath-test-projected-bl82 to disappear
Aug 22 20:03:15.758: INFO: Pod pod-subpath-test-projected-bl82 no longer exists
STEP: Deleting pod pod-subpath-test-projected-bl82
Aug 22 20:03:15.758: INFO: Deleting pod "pod-subpath-test-projected-bl82" in namespace "subpath-954"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:15.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-954" for this suite.

• [SLOW TEST:27.530 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":232,"skipped":3785,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:15.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8430" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3797,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:15.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 22 20:03:20.678: INFO: Successfully updated pod "labelsupdate72651ddd-7186-4741-b841-578961fe83b5"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:22.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1485" for this suite.

• [SLOW TEST:6.977 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3816,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:22.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 22 20:03:23.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 22 20:03:23.923: INFO: stderr: ""
Aug 22 20:03:23.923: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:23.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9932" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":235,"skipped":3830,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:23.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 20:03:24.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6961'
Aug 22 20:03:24.855: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 22 20:03:24.855: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 22 20:03:27.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6961'
Aug 22 20:03:27.399: INFO: stderr: ""
Aug 22 20:03:27.399: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:27.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6961" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":236,"skipped":3830,"failed":0}
SSSSSSSSSSS
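
The stderr above flags the deployment generator as deprecated. A sketch of both forms, assuming a reasonably current kubectl for the replacement:

  # Generator form exercised by the test (prints the deprecation warning):
  kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
  # Replacement suggested by the warning:
  kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine

Note that on kubectl 1.18 and later a plain "kubectl run" creates a Pod, not a Deployment.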
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:27.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 20:03:28.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902" in namespace "downward-api-8181" to be "success or failure"
Aug 22 20:03:28.762: INFO: Pod "downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902": Phase="Pending", Reason="", readiness=false. Elapsed: 404.72639ms
Aug 22 20:03:30.942: INFO: Pod "downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58477834s
Aug 22 20:03:32.944: INFO: Pod "downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587098312s
Aug 22 20:03:34.948: INFO: Pod "downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.590915595s
STEP: Saw pod success
Aug 22 20:03:34.948: INFO: Pod "downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902" satisfied condition "success or failure"
Aug 22 20:03:34.950: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902 container client-container: 
STEP: delete the pod
Aug 22 20:03:35.016: INFO: Waiting for pod downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902 to disappear
Aug 22 20:03:35.061: INFO: Pod downwardapi-volume-6a7bd627-0336-44dd-b95a-33b1bbcac902 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:35.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8181" for this suite.

• [SLOW TEST:7.663 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3841,"failed":0}
SS
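
The pod created above projects the container's cpu limit into a file through a downwardAPI volume. A minimal sketch of such a pod, with busybox standing in for the test image and all names hypothetical:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: "500m"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: 1m                # file reads "500" for a 500m limit
  EOF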
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:35.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 22 20:03:35.265: INFO: Waiting up to 5m0s for pod "pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc" in namespace "emptydir-9783" to be "success or failure"
Aug 22 20:03:35.403: INFO: Pod "pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 137.406017ms
Aug 22 20:03:37.630: INFO: Pod "pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36499172s
Aug 22 20:03:39.635: INFO: Pod "pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc": Phase="Running", Reason="", readiness=true. Elapsed: 4.369600993s
Aug 22 20:03:41.639: INFO: Pod "pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.373201135s
STEP: Saw pod success
Aug 22 20:03:41.639: INFO: Pod "pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc" satisfied condition "success or failure"
Aug 22 20:03:41.642: INFO: Trying to get logs from node jerma-worker pod pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc container test-container: 
STEP: delete the pod
Aug 22 20:03:41.701: INFO: Waiting for pod pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc to disappear
Aug 22 20:03:41.846: INFO: Pod pod-077d9124-d6e4-4c7a-9e4a-e297544c5ddc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:41.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9783" for this suite.

• [SLOW TEST:6.819 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3843,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
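
The emptydir test above writes a 0666 file as root on the default medium and checks the result. A hedged equivalent with busybox (names hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-demo           # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # Create a file, set mode 0666, print the mode back.
      command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && stat -c '%a' /mnt/volume/f"]
      volumeMounts:
      - name: vol
        mountPath: /mnt/volume
    volumes:
    - name: vol
      emptyDir: {}                     # default medium, i.e. node disk
  EOF
  kubectl logs emptydir-0666-demo      # expect: 666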
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:41.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 22 20:03:51.543: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:03:53.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4163" for this suite.

• [SLOW TEST:11.699 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":239,"skipped":3868,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
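
Adoption and release above are driven entirely by the pod's labels versus the ReplicaSet's selector. The same transitions can be inspected and triggered by hand (pod name from the log; the label key 'name' matches the step text):

  # An orphan pod whose labels match the selector gains an ownerReference:
  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[*].name}'
  # Changing the matched label releases it again; the ReplicaSet spawns a
  # replacement to restore its replica count:
  kubectl label pod pod-adoption-release name=released --overwrite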
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:03:53.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 20:03:56.144: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 20:03:58.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723437, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:04:00.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723437, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:04:02.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723437, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:04:04.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723437, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723436, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 20:04:08.227: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:04:08.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-458" for this suite.
STEP: Destroying namespace "webhook-458-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.341 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":240,"skipped":3890,"failed":0}
SSSSSS
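
The discovery documents fetched above are ordinary REST endpoints and can be pulled with kubectl's raw mode:

  kubectl get --raw /apis | grep admissionregistration.k8s.io
  kubectl get --raw /apis/admissionregistration.k8s.io
  # The v1 resource list should name mutatingwebhookconfigurations and
  # validatingwebhookconfigurations:
  kubectl get --raw /apis/admissionregistration.k8s.io/v1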
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:04:10.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 20:04:12.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a" in namespace "downward-api-1876" to be "success or failure"
Aug 22 20:04:12.341: INFO: Pod "downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a": Phase="Pending", Reason="", readiness=false. Elapsed: 70.666403ms
Aug 22 20:04:14.841: INFO: Pod "downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.570474093s
Aug 22 20:04:16.846: INFO: Pod "downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575239913s
Aug 22 20:04:18.954: INFO: Pod "downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.683415882s
Aug 22 20:04:20.958: INFO: Pod "downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.687347481s
STEP: Saw pod success
Aug 22 20:04:20.958: INFO: Pod "downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a" satisfied condition "success or failure"
Aug 22 20:04:20.961: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a container client-container: 
STEP: delete the pod
Aug 22 20:04:21.171: INFO: Waiting for pod downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a to disappear
Aug 22 20:04:21.415: INFO: Pod downwardapi-volume-317f7e8b-2211-4f68-b7dc-7872ca85ff8a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:04:21.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1876" for this suite.

• [SLOW TEST:10.493 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3896,"failed":0}
SSSSSSSSSSSSS
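
With no limits.cpu set, the downwardAPI volume falls back to the node's allocatable cpu, which can be cross-checked against the node object (node name from the log):

  kubectl get node jerma-worker2 -o jsonpath='{.status.allocatable.cpu}'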
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:04:21.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 22 20:04:21.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7858'
Aug 22 20:04:22.097: INFO: stderr: ""
Aug 22 20:04:22.097: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 22 20:04:22.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7858'
Aug 22 20:04:22.283: INFO: stderr: ""
Aug 22 20:04:22.283: INFO: stdout: "update-demo-nautilus-rhjnc update-demo-nautilus-ts82z "
Aug 22 20:04:22.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhjnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:04:22.393: INFO: stderr: ""
Aug 22 20:04:22.393: INFO: stdout: ""
Aug 22 20:04:22.393: INFO: update-demo-nautilus-rhjnc is created but not running
Aug 22 20:04:27.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7858'
Aug 22 20:04:27.588: INFO: stderr: ""
Aug 22 20:04:27.588: INFO: stdout: "update-demo-nautilus-rhjnc update-demo-nautilus-ts82z "
Aug 22 20:04:27.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhjnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:04:27.673: INFO: stderr: ""
Aug 22 20:04:27.673: INFO: stdout: "true"
Aug 22 20:04:27.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhjnc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:04:27.864: INFO: stderr: ""
Aug 22 20:04:27.864: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 22 20:04:27.865: INFO: validating pod update-demo-nautilus-rhjnc
Aug 22 20:04:27.901: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 22 20:04:27.901: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 22 20:04:27.901: INFO: update-demo-nautilus-rhjnc is verified up and running
Aug 22 20:04:27.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ts82z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:04:28.094: INFO: stderr: ""
Aug 22 20:04:28.094: INFO: stdout: "true"
Aug 22 20:04:28.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ts82z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:04:28.193: INFO: stderr: ""
Aug 22 20:04:28.193: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 22 20:04:28.193: INFO: validating pod update-demo-nautilus-ts82z
Aug 22 20:04:28.196: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 22 20:04:28.197: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 22 20:04:28.197: INFO: update-demo-nautilus-ts82z is verified up and running
STEP: rolling-update to new replication controller
Aug 22 20:04:28.198: INFO: scanned /root for discovery docs: 
Aug 22 20:04:28.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7858'
Aug 22 20:05:04.559: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 22 20:05:04.559: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 22 20:05:04.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7858'
Aug 22 20:05:05.212: INFO: stderr: ""
Aug 22 20:05:05.212: INFO: stdout: "update-demo-kitten-7tw79 update-demo-kitten-rpjpp "
Aug 22 20:05:05.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7tw79 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:05:05.694: INFO: stderr: ""
Aug 22 20:05:05.694: INFO: stdout: "true"
Aug 22 20:05:05.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7tw79 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:05:06.285: INFO: stderr: ""
Aug 22 20:05:06.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 22 20:05:06.285: INFO: validating pod update-demo-kitten-7tw79
Aug 22 20:05:06.338: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 22 20:05:06.338: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 22 20:05:06.338: INFO: update-demo-kitten-7tw79 is verified up and running
Aug 22 20:05:06.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rpjpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:05:06.599: INFO: stderr: ""
Aug 22 20:05:06.599: INFO: stdout: "true"
Aug 22 20:05:06.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rpjpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7858'
Aug 22 20:05:06.703: INFO: stderr: ""
Aug 22 20:05:06.703: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 22 20:05:06.703: INFO: validating pod update-demo-kitten-rpjpp
Aug 22 20:05:06.706: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 22 20:05:06.706: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 22 20:05:06.706: INFO: update-demo-kitten-rpjpp is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:05:06.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7858" for this suite.

• [SLOW TEST:45.289 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":242,"skipped":3909,"failed":0}
SSS
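
kubectl rolling-update, flagged as deprecated in the stderr above, was removed in later kubectl releases. A sketch of the Deployment-based path that the warning's "use rollout instead" points to (deployment and container names hypothetical):

  # RC-era form exercised by the test:
  kubectl rolling-update update-demo-nautilus --update-period=1s -f new-rc.yaml
  # Deployment-era equivalent:
  kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
  kubectl rollout status deployment/update-demo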
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:05:06.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 20:05:12.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 20:05:16.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:05:18.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:05:20.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:05:22.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:05:24.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723512, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 20:05:28.165: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:05:29.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6471" for this suite.
STEP: Destroying namespace "webhook-6471-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:25.094 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":243,"skipped":3912,"failed":0}
SS
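
The registration step above amounts to creating a MutatingWebhookConfiguration. A minimal hedged sketch; the service name matches the log, while the object name, path, and caBundle are placeholders that must be filled in for a real cluster:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: mutate-configmaps-demo           # hypothetical name
  webhooks:
  - name: mutate-configmaps.example.com    # hypothetical webhook name
    clientConfig:
      service:
        name: e2e-test-webhook             # service deployed above
        namespace: webhook-6471
        path: /mutating-configmaps         # hypothetical path
      caBundle: "<base64-encoded CA>"      # placeholder, must be real base64
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]
  EOF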
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:05:31.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 20:05:32.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1789'
Aug 22 20:05:33.179: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 22 20:05:33.179: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 22 20:05:33.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1789'
Aug 22 20:05:34.586: INFO: stderr: ""
Aug 22 20:05:34.586: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:05:34.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1789" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":244,"skipped":3914,"failed":0}
SS
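
The job generator is deprecated the same way the deployment generator is. Both forms as a sketch:

  # Generator form from the test (prints the warning above):
  kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine
  # Replacement suggested by the warning:
  kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine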
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:05:35.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:05:58.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2836" for this suite.

• [SLOW TEST:23.154 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":245,"skipped":3916,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:05:58.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0822 20:06:29.321682       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 22 20:06:29.321: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:06:29.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5518" for this suite.

• [SLOW TEST:30.597 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":246,"skipped":3918,"failed":0}
SSSSSSSSSSS
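
Orphaning above corresponds to deleting with deleteOptions.PropagationPolicy=Orphan. From kubectl the flag spelling depends on the release (deployment name hypothetical):

  # kubectl 1.20 and later:
  kubectl delete deployment my-deployment --cascade=orphan
  # kubectl of the 1.17 era used a boolean:
  kubectl delete deployment my-deployment --cascade=false
  # Either way the ReplicaSet should survive the deletion:
  kubectl get rs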
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:06:29.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 20:06:29.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 20:06:31.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:06:33.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723589, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 20:06:37.095: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 20:06:37.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7072-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:06:41.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-222" for this suite.
STEP: Destroying namespace "webhook-222-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.604 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":247,"skipped":3929,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:06:42.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 22 20:06:44.523: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8864 /api/v1/namespaces/watch-8864/configmaps/e2e-watch-test-resource-version 9fba7a1c-dcf1-4a2f-bee5-97f39f05d158 2560762 0 2020-08-22 20:06:44 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 22 20:06:44.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8864 /api/v1/namespaces/watch-8864/configmaps/e2e-watch-test-resource-version 9fba7a1c-dcf1-4a2f-bee5-97f39f05d158 2560763 0 2020-08-22 20:06:44 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:06:44.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8864" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":248,"skipped":3954,"failed":0}
SSS
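
A watch started from a given resourceVersion replays only events newer than that version, which is why just the second MODIFIED and the DELETED notifications arrive above. A raw-API sketch (namespace and object from the log; the resourceVersion is hypothetical):

  # watch=1 holds the connection open and streams events after RV 2560761:
  kubectl get --raw "/api/v1/namespaces/watch-8864/configmaps?watch=1&resourceVersion=2560761&fieldSelector=metadata.name=e2e-watch-test-resource-version"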
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:06:44.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Aug 22 20:06:59.719: INFO: 5 pods remaining
Aug 22 20:06:59.719: INFO: 5 pods have nil DeletionTimestamp
Aug 22 20:06:59.719: INFO: 
STEP: Gathering metrics
W0822 20:07:03.441861       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 22 20:07:03.441: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:07:03.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6673" for this suite.

• [SLOW TEST:18.889 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":249,"skipped":3957,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:07:03.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:07:27.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2199" for this suite.

• [SLOW TEST:24.292 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":250,"skipped":4012,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:07:27.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-558efbe2-a708-48e4-b9a1-b95f0102a3cd
STEP: Creating a pod to test consume configMaps
Aug 22 20:07:28.054: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5" in namespace "projected-5340" to be "success or failure"
Aug 22 20:07:28.129: INFO: Pod "pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5": Phase="Pending", Reason="", readiness=false. Elapsed: 75.145642ms
Aug 22 20:07:30.231: INFO: Pod "pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176876653s
Aug 22 20:07:32.236: INFO: Pod "pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5": Phase="Running", Reason="", readiness=true. Elapsed: 4.181225568s
Aug 22 20:07:34.255: INFO: Pod "pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.200539144s
STEP: Saw pod success
Aug 22 20:07:34.255: INFO: Pod "pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5" satisfied condition "success or failure"
Aug 22 20:07:34.257: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 22 20:07:34.313: INFO: Waiting for pod pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5 to disappear
Aug 22 20:07:34.322: INFO: Pod pod-projected-configmaps-18fb696e-370f-454a-acc5-d74ee15e15b5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:07:34.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5340" for this suite.

• [SLOW TEST:6.587 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4018,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:07:34.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:08:34.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1222" for this suite.

• [SLOW TEST:60.085 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4075,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:08:34.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 20:08:34.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada" in namespace "projected-5593" to be "success or failure"
Aug 22 20:08:34.606: INFO: Pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada": Phase="Pending", Reason="", readiness=false. Elapsed: 37.731029ms
Aug 22 20:08:36.610: INFO: Pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042053646s
Aug 22 20:08:39.298: INFO: Pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada": Phase="Pending", Reason="", readiness=false. Elapsed: 4.729189348s
Aug 22 20:08:42.101: INFO: Pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada": Phase="Pending", Reason="", readiness=false. Elapsed: 7.53261676s
Aug 22 20:08:45.784: INFO: Pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada": Phase="Pending", Reason="", readiness=false. Elapsed: 11.215169411s
Aug 22 20:08:47.841: INFO: Pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.272843285s
STEP: Saw pod success
Aug 22 20:08:47.841: INFO: Pod "downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada" satisfied condition "success or failure"
Aug 22 20:08:47.852: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada container client-container: 
STEP: delete the pod
Aug 22 20:08:49.954: INFO: Waiting for pod downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada to disappear
Aug 22 20:08:50.183: INFO: Pod downwardapi-volume-1e8713b4-2615-4b28-beeb-f4fd67b48ada no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:08:50.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5593" for this suite.

• [SLOW TEST:16.824 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4075,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:08:51.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-2ef25bb8-487c-448c-bee3-342d08b9fdaf
STEP: Creating a pod to test consume configMaps
Aug 22 20:08:53.937: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99" in namespace "projected-4542" to be "success or failure"
Aug 22 20:08:54.252: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99": Phase="Pending", Reason="", readiness=false. Elapsed: 315.203701ms
Aug 22 20:08:56.822: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88569319s
Aug 22 20:08:58.890: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.953613317s
Aug 22 20:09:01.114: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99": Phase="Pending", Reason="", readiness=false. Elapsed: 7.177286718s
Aug 22 20:09:03.317: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99": Phase="Pending", Reason="", readiness=false. Elapsed: 9.380176989s
Aug 22 20:09:05.321: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99": Phase="Running", Reason="", readiness=true. Elapsed: 11.384279667s
Aug 22 20:09:07.603: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.666343622s
STEP: Saw pod success
Aug 22 20:09:07.603: INFO: Pod "pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99" satisfied condition "success or failure"
Aug 22 20:09:07.606: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 22 20:09:08.508: INFO: Waiting for pod pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99 to disappear
Aug 22 20:09:08.511: INFO: Pod pod-projected-configmaps-888482e6-dc33-4b46-91dc-750a64140f99 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:09:08.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4542" for this suite.

• [SLOW TEST:17.280 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4080,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:09:08.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-59828c42-d76b-441c-8e87-0c818d69fb42 in namespace container-probe-5300
Aug 22 20:09:14.214: INFO: Started pod busybox-59828c42-d76b-441c-8e87-0c818d69fb42 in namespace container-probe-5300
STEP: checking the pod's current state and verifying that restartCount is present
Aug 22 20:09:14.217: INFO: Initial restart count of pod busybox-59828c42-d76b-441c-8e87-0c818d69fb42 is 0
Aug 22 20:10:04.379: INFO: Restart count of pod container-probe-5300/busybox-59828c42-d76b-441c-8e87-0c818d69fb42 is now 1 (50.162355698s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:10:04.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5300" for this suite.

• [SLOW TEST:55.910 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4104,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:10:04.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 20:10:05.475: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 22 20:10:05.576: INFO: Number of nodes with available pods: 0
Aug 22 20:10:05.576: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 22 20:10:05.711: INFO: Number of nodes with available pods: 0
Aug 22 20:10:05.711: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:06.715: INFO: Number of nodes with available pods: 0
Aug 22 20:10:06.715: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:07.747: INFO: Number of nodes with available pods: 0
Aug 22 20:10:07.747: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:08.838: INFO: Number of nodes with available pods: 0
Aug 22 20:10:08.838: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:09.844: INFO: Number of nodes with available pods: 0
Aug 22 20:10:09.844: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:10.747: INFO: Number of nodes with available pods: 0
Aug 22 20:10:10.747: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:11.716: INFO: Number of nodes with available pods: 1
Aug 22 20:10:11.716: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 22 20:10:12.020: INFO: Number of nodes with available pods: 1
Aug 22 20:10:12.020: INFO: Number of running nodes: 0, number of available pods: 1
Aug 22 20:10:13.044: INFO: Number of nodes with available pods: 0
Aug 22 20:10:13.044: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 22 20:10:13.181: INFO: Number of nodes with available pods: 0
Aug 22 20:10:13.181: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:14.185: INFO: Number of nodes with available pods: 0
Aug 22 20:10:14.185: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:15.186: INFO: Number of nodes with available pods: 0
Aug 22 20:10:15.186: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:16.189: INFO: Number of nodes with available pods: 0
Aug 22 20:10:16.189: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:17.185: INFO: Number of nodes with available pods: 0
Aug 22 20:10:17.185: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:18.184: INFO: Number of nodes with available pods: 0
Aug 22 20:10:18.184: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:19.310: INFO: Number of nodes with available pods: 0
Aug 22 20:10:19.310: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 20:10:20.361: INFO: Number of nodes with available pods: 1
Aug 22 20:10:20.361: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2547, will wait for the garbage collector to delete the pods
Aug 22 20:10:20.498: INFO: Deleting DaemonSet.extensions daemon-set took: 79.386441ms
Aug 22 20:10:20.799: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.252742ms
Aug 22 20:10:33.562: INFO: Number of nodes with available pods: 0
Aug 22 20:10:33.562: INFO: Number of running nodes: 0, number of available pods: 0
Aug 22 20:10:33.565: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2547/daemonsets","resourceVersion":"2561848"},"items":null}

Aug 22 20:10:33.873: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2547/pods","resourceVersion":"2561849"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:10:33.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2547" for this suite.

• [SLOW TEST:29.727 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":256,"skipped":4113,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:10:34.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 20:10:37.354: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 20:10:39.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723838, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:10:42.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723838, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:10:43.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723838, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:10:45.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723838, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723837, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 20:10:49.406: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:10:50.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1914" for this suite.
STEP: Destroying namespace "webhook-1914-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.731 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":257,"skipped":4132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:10:50.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:10:59.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-672" for this suite.

• [SLOW TEST:8.991 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":258,"skipped":4189,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:10:59.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-45091c1c-f3b5-4da9-8319-3a098392fec4
STEP: Creating a pod to test consume configMaps
Aug 22 20:11:00.472: INFO: Waiting up to 5m0s for pod "pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb" in namespace "configmap-9001" to be "success or failure"
Aug 22 20:11:00.712: INFO: Pod "pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 240.014744ms
Aug 22 20:11:02.716: INFO: Pod "pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243799588s
Aug 22 20:11:04.720: INFO: Pod "pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248072771s
Aug 22 20:11:06.724: INFO: Pod "pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb": Phase="Running", Reason="", readiness=true. Elapsed: 6.251946705s
Aug 22 20:11:08.729: INFO: Pod "pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.256585822s
STEP: Saw pod success
Aug 22 20:11:08.729: INFO: Pod "pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb" satisfied condition "success or failure"
Aug 22 20:11:08.732: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb container configmap-volume-test: 
STEP: delete the pod
Aug 22 20:11:08.766: INFO: Waiting for pod pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb to disappear
Aug 22 20:11:08.778: INFO: Pod pod-configmaps-949bcbca-50cf-4bb7-b8d6-f45dae76c4eb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:11:08.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9001" for this suite.

• [SLOW TEST:8.904 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:11:08.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 20:11:09.394: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 20:11:11.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 20:11:13.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733723869, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 20:11:16.429: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 20:11:16.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:11:17.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4481" for this suite.
STEP: Destroying namespace "webhook-4481-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.880 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":260,"skipped":4236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:11:17.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-b0ba29ba-8d04-4d53-a8a2-6e29e30f3ab1
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:11:27.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-692" for this suite.

• [SLOW TEST:10.245 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4283,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:11:27.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4908.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4908.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4908.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4908.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4908.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4908.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 20:11:39.445: INFO: DNS probes using dns-4908/dns-test-7cae66ef-3428-4eec-8e36-8e18096089fa succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:11:39.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4908" for this suite.

• [SLOW TEST:12.746 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":262,"skipped":4304,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:11:40.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Aug 22 20:11:41.733: INFO: Waiting up to 5m0s for pod "client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390" in namespace "containers-9047" to be "success or failure"
Aug 22 20:11:41.892: INFO: Pod "client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390": Phase="Pending", Reason="", readiness=false. Elapsed: 158.628756ms
Aug 22 20:11:44.198: INFO: Pod "client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464897286s
Aug 22 20:11:46.215: INFO: Pod "client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482428933s
Aug 22 20:11:48.245: INFO: Pod "client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.511718256s
STEP: Saw pod success
Aug 22 20:11:48.245: INFO: Pod "client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390" satisfied condition "success or failure"
Aug 22 20:11:48.246: INFO: Trying to get logs from node jerma-worker pod client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390 container test-container: 
STEP: delete the pod
Aug 22 20:11:48.262: INFO: Waiting for pod client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390 to disappear
Aug 22 20:11:48.267: INFO: Pod client-containers-baba0dbd-23d0-43a2-a8f4-38dfe15f2390 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:11:48.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9047" for this suite.

• [SLOW TEST:7.616 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4305,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:11:48.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d9048d3c-ee81-4774-9802-6dee9d455c87
STEP: Creating a pod to test consume configMaps
Aug 22 20:11:48.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b" in namespace "projected-3085" to be "success or failure"
Aug 22 20:11:48.705: INFO: Pod "pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.509304ms
Aug 22 20:11:50.710: INFO: Pod "pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036225134s
Aug 22 20:11:52.713: INFO: Pod "pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b": Phase="Running", Reason="", readiness=true. Elapsed: 4.040078799s
Aug 22 20:11:54.718: INFO: Pod "pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044231839s
STEP: Saw pod success
Aug 22 20:11:54.718: INFO: Pod "pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b" satisfied condition "success or failure"
Aug 22 20:11:54.721: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 22 20:11:54.929: INFO: Waiting for pod pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b to disappear
Aug 22 20:11:54.962: INFO: Pod pod-projected-configmaps-de975adc-ca6a-40a3-ae2f-1f79599dbd5b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:11:54.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3085" for this suite.

• [SLOW TEST:6.966 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4306,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:11:55.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 22 20:11:55.719: INFO: Waiting up to 5m0s for pod "client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2" in namespace "containers-9616" to be "success or failure"
Aug 22 20:11:55.741: INFO: Pod "client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.840912ms
Aug 22 20:11:57.751: INFO: Pod "client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032451093s
Aug 22 20:11:59.856: INFO: Pod "client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136921498s
Aug 22 20:12:01.903: INFO: Pod "client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.184727235s
STEP: Saw pod success
Aug 22 20:12:01.903: INFO: Pod "client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2" satisfied condition "success or failure"
Aug 22 20:12:01.906: INFO: Trying to get logs from node jerma-worker2 pod client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2 container test-container: 
STEP: delete the pod
Aug 22 20:12:02.097: INFO: Waiting for pod client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2 to disappear
Aug 22 20:12:02.366: INFO: Pod client-containers-e2b6ab49-a6c7-4a89-b6f3-0990bbc42de2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:12:02.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9616" for this suite.

• [SLOW TEST:7.179 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4340,"failed":0}
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:12:02.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 20:12:03.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf" in namespace "downward-api-8411" to be "success or failure"
Aug 22 20:12:03.840: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 553.621563ms
Aug 22 20:12:05.845: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558925909s
Aug 22 20:12:08.108: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.821557639s
Aug 22 20:12:10.162: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.875964626s
Aug 22 20:12:12.166: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879903557s
Aug 22 20:12:14.170: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf": Phase="Running", Reason="", readiness=true. Elapsed: 10.884040989s
Aug 22 20:12:16.175: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.888533612s
STEP: Saw pod success
Aug 22 20:12:16.175: INFO: Pod "downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf" satisfied condition "success or failure"
Aug 22 20:12:16.178: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf container client-container: 
STEP: delete the pod
Aug 22 20:12:16.215: INFO: Waiting for pod downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf to disappear
Aug 22 20:12:16.228: INFO: Pod downwardapi-volume-b750e775-d9c7-4792-b2b2-ae65a17ee3cf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:12:16.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8411" for this suite.

• [SLOW TEST:13.818 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4340,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:12:16.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 20:12:16.345: INFO: Creating deployment "webserver-deployment"
Aug 22 20:12:16.348: INFO: Waiting for observed generation 1
Aug 22 20:12:18.857: INFO: Waiting for all required pods to come up
Aug 22 20:12:18.863: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 22 20:12:33.590: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 22 20:12:33.595: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 22 20:12:33.599: INFO: Updating deployment webserver-deployment
Aug 22 20:12:33.599: INFO: Waiting for observed generation 2
Aug 22 20:12:35.864: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 22 20:12:36.054: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 22 20:12:36.599: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 22 20:12:37.386: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 22 20:12:37.386: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 22 20:12:37.421: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 22 20:12:38.773: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 22 20:12:38.773: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 22 20:12:39.030: INFO: Updating deployment webserver-deployment
Aug 22 20:12:39.030: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 22 20:12:40.426: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 22 20:12:43.510: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
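
The 20/13 split verified in the last two lines is the proportional-scaling arithmetic falling out of the limits visible in this rollout: maxUnavailable=2 had already let the old ReplicaSet be cut from 10 to 8, maxSurge=3 had capped the stuck new ReplicaSet at 5, and scaling the Deployment from 10 to 30 raises the total cap to 30+3=33. The 20 extra replicas are then divided in proportion to current sizes, 8:5. A back-of-the-envelope check of those numbers follows; the controller's real logic, including leftover and tie handling, lives in pkg/controller/deployment/util, so this is a sketch, not that implementation.

package main

import "fmt"

func main() {
    const (
        newScale = 30
        maxSurge = 3
        oldRS    = 8 // old ReplicaSet spec.replicas when the scale-up lands
        newRS    = 5 // new (stuck) ReplicaSet spec.replicas at that moment
    )

    allowed := newScale + maxSurge      // 33: hard cap during the rollout
    extra := allowed - (oldRS + newRS)  // 20 replicas to hand out

    // Proportional shares, integer-truncated; the deployment controller
    // assigns the leftover replica by its own tie-breaking rules, and in
    // this run it landed on the new ReplicaSet, matching the log.
    oldShare := extra * oldRS / (oldRS + newRS) // 20*8/13 = 12
    newShare := extra - oldShare                // remainder: 8

    fmt.Printf("old RS: %d -> %d\n", oldRS, oldRS+oldShare) // 8 -> 20
    fmt.Printf("new RS: %d -> %d\n", newRS, newRS+newShare) // 5 -> 13
}
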
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 20:12:45.438: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5526 /apis/apps/v1/namespaces/deployment-5526/deployments/webserver-deployment b32fad63-9891-423d-aff0-3f7666643532 2562840 3 2020-08-22 20:12:16 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c132f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-22 20:12:40 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-22 20:12:41 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 22 20:12:45.960: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5526 /apis/apps/v1/namespaces/deployment-5526/replicasets/webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 2562829 3 2020-08-22 20:12:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b32fad63-9891-423d-aff0-3f7666643532 0xc003c137c7 0xc003c137c8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c13838  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 20:12:45.960: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 22 20:12:45.961: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5526 /apis/apps/v1/namespaces/deployment-5526/replicasets/webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 2562821 3 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b32fad63-9891-423d-aff0-3f7666643532 0xc003c13707 0xc003c13708}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c13768  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 22 20:12:46.752: INFO: Pod "webserver-deployment-595b5b9587-286l7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-286l7 webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-286l7 9e08bb0f-98ff-4786-abfa-dcd61b1c083c 2562853 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aecaa7 0xc003aecaa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 20:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.752: INFO: Pod "webserver-deployment-595b5b9587-4k2nb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4k2nb webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-4k2nb 65bd3997-b6e4-4898-a089-8e25d4d30d8d 2562831 0 2020-08-22 20:12:39 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aecc07 0xc003aecc08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 20:12:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.753: INFO: Pod "webserver-deployment-595b5b9587-9pmzs" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9pmzs webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-9pmzs 2f7f1b67-5822-48fc-ad54-8bb007406f20 2562804 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aecd67 0xc003aecd68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.753: INFO: Pod "webserver-deployment-595b5b9587-9vvmr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9vvmr webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-9vvmr 6fc2ac4b-b325-44d6-9367-677f42c0a8ef 2562816 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aece97 0xc003aece98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.753: INFO: Pod "webserver-deployment-595b5b9587-bkplv" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bkplv webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-bkplv d04507da-5139-4668-b66a-3dcff943315c 2562644 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aecfb7 0xc003aecfb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.204,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4dfd690d3c31b6a54a5a741efe4240b3a7a273ec9f5a733fc18ce3b418b3f99b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.753: INFO: Pod "webserver-deployment-595b5b9587-bnlw2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bnlw2 webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-bnlw2 a1c059ec-91f2-4eec-8b7e-ba58f2399ef2 2562655 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aed137 0xc003aed138}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.206,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e971a5a24ed69fa98c59b27a594e4ce246dc3442d86d9dc29ef7a652d1e3c4fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.753: INFO: Pod "webserver-deployment-595b5b9587-bxw2m" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bxw2m webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-bxw2m 6634f0a1-2392-44ad-aa2d-9431266e0ff0 2562887 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aed2b7 0xc003aed2b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 20:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.753: INFO: Pod "webserver-deployment-595b5b9587-cnj86" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cnj86 webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-cnj86 c624579c-d5c2-418d-bee6-d49f808b2710 2562664 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aed417 0xc003aed418}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.205,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://143c9a811a0c0bf4cf2681d2fbb0590e50a6e657870fa11043c394fcdf7f9e4a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.205,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.754: INFO: Pod "webserver-deployment-595b5b9587-dgn47" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dgn47 webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-dgn47 c0dafd2b-aaa5-457d-80b8-d89c6210f57f 2562817 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aed597 0xc003aed598}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.754: INFO: Pod "webserver-deployment-595b5b9587-dhkkq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dhkkq webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-dhkkq 0c82f2a1-4b3e-4af3-a969-9cb84fc92e79 2562813 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aed6b7 0xc003aed6b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.754: INFO: Pod "webserver-deployment-595b5b9587-gqkms" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gqkms webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-gqkms 2d8f7210-8aca-482b-a217-f6ceba8137bb 2562802 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aed7d7 0xc003aed7d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.754: INFO: Pod "webserver-deployment-595b5b9587-jvb8k" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jvb8k webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-jvb8k 6a04be70-b1db-49b3-a83f-7fd6110962b9 2562809 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aed8f7 0xc003aed8f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.754: INFO: Pod "webserver-deployment-595b5b9587-k5b2f" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k5b2f webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-k5b2f 88979736-e3c7-4164-ad87-231a2629a17b 2562673 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aeda17 0xc003aeda18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.200,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8b3bed5a34d8f75b5169dd3c5f9a46325e66f8e61873d6b29c2bb74dce5dcdd7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
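The contrast above is the heart of this check: webserver-deployment-595b5b9587-k5b2f is Phase:Running with a Ready condition of True (and a PodIP and running ContainerStatuses), while the "is not available" pods before it are Pending and carry only a PodScheduled condition. Below is a minimal stand-alone sketch of that availability rule, assuming the k8s.io/api types; it mirrors, in simplified form, the logic the deployment controller and the e2e framework apply, and is illustrative rather than the framework's own helper.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// podIsAvailable reports whether pod has been Ready for at least
// minReadySeconds, which is what makes a replica count as "available"
// in the log lines above. Simplified re-implementation for illustration.
func podIsAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	if pod.Status.Phase != corev1.PodRunning {
		// The Pending pods dumped above fail here: no Ready condition yet.
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		// Ready must have held long enough before the pod counts.
		return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	pending := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}
	fmt.Println(podIsAvailable(pending, 0, time.Now())) // false, like the "is not available" pods
}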
Aug 22 20:12:46.754: INFO: Pod "webserver-deployment-595b5b9587-l44bp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-l44bp webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-l44bp 41082bb5-1b46-4afb-a269-4da7d46736ab 2562815 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aedbb7 0xc003aedbb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.755: INFO: Pod "webserver-deployment-595b5b9587-ld5r6" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ld5r6 webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-ld5r6 ccbca30e-5aef-4f62-8a46-baf0fe3380ac 2562666 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aedcd7 0xc003aedcd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.197,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0797889a2f378f14b1ca5050b04945b0e0d6d20fe8a880031e30f0f6f3abedb7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.755: INFO: Pod "webserver-deployment-595b5b9587-pb6rj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pb6rj webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-pb6rj 9764dbaf-0ad6-466c-b18e-356639baab2c 2562852 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc003aede77 0xc003aede78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 20:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
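Pods like webserver-deployment-595b5b9587-pb6rj above are scheduled and Initialized but still Pending: their single container sits in ContainerState{Waiting:...} with Reason:ContainerCreating, so Ready stays False with Reason:ContainersNotReady. A small sketch, assuming the same k8s.io/api/core/v1 types, of how those waiting reasons can be surfaced when diagnosing a stuck rollout like this one:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReasons collects the Waiting reason of every container in a
// pod; Reason:ContainerCreating in the dumps above lives in this field.
func waitingReasons(pod *corev1.Pod) []string {
	var reasons []string
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			reasons = append(reasons, fmt.Sprintf("%s: %s", cs.Name, w.Reason))
		}
	}
	return reasons
}

func main() {
	// A pod shaped like webserver-deployment-595b5b9587-pb6rj above.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:  "httpd",
			State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"}},
		}},
	}}
	fmt.Println(waitingReasons(pod)) // [httpd: ContainerCreating]
}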
Aug 22 20:12:46.755: INFO: Pod "webserver-deployment-595b5b9587-pj2rn" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pj2rn webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-pj2rn 2812a890-20f7-45e1-8a71-a5de457506bf 2562652 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc0004d40e7 0xc0004d40e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.198,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ec022d4db7c6afbeddafa5d72283e9343d6fb1fcbce52a0c44856cae0dd07f79,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.198,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.755: INFO: Pod "webserver-deployment-595b5b9587-vsdn6" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vsdn6 webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-vsdn6 f3c55892-08ab-4ebe-9c36-a1e8c7c5102a 2562676 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc0004d5ac7 0xc0004d5ac8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.207,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6fea4aeec2218bc96ec4345dffa340f5051e03a359c68f7799997b30437c7966,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.756: INFO: Pod "webserver-deployment-595b5b9587-vvdkd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vvdkd webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-vvdkd e1e83460-7a54-4973-b007-501ec6aaaa00 2562681 0 2020-08-22 20:12:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc002f6e107 0xc002f6e108}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.208,StartTime:2020-08-22 20:12:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 20:12:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c1ad9958a6dc10387b08091177bfeef3a27998809c9b57a2c02f30b104411048,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.756: INFO: Pod "webserver-deployment-595b5b9587-w6p95" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w6p95 webserver-deployment-595b5b9587- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-595b5b9587-w6p95 a160bd93-d577-4e8f-bbf2-70504226aff0 2562863 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c4c40f27-f97d-4f51-808f-9a6a164a1db4 0xc002f6e287 0xc002f6e288}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 20:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
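The remaining entries belong to the other ReplicaSet, webserver-deployment-c7997dcc8, whose template image is webserver:404, a tag that cannot be pulled, so none of its pods can become available and the rollout stalls mid-way. The pod-template-hash label visible in every dump is what ties a pod to its ReplicaSet. A hedged client-go sketch that lists one ReplicaSet's pods by that label; the namespace and hash come from the log above, while the kubeconfig path is an assumption about the environment:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: a kubeconfig at the default location, as in this run.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and hash taken from the dumps above: c7997dcc8 is the
	// ReplicaSet carrying the unpullable webserver:404 template.
	pods, err := client.CoreV1().Pods("deployment-5526").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "pod-template-hash=c7997dcc8",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}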
Aug 22 20:12:46.756: INFO: Pod "webserver-deployment-c7997dcc8-2grgv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2grgv webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-2grgv b8344632-f0bd-4562-97a4-98d0787946a9 2562857 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6e3e7 0xc002f6e3e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 20:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.756: INFO: Pod "webserver-deployment-c7997dcc8-2p5bx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2p5bx webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-2p5bx 3a229aa3-c290-460b-b6c2-6dbfcd15743e 2562812 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6e567 0xc002f6e568}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.757: INFO: Pod "webserver-deployment-c7997dcc8-5cbp9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5cbp9 webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-5cbp9 a71ac997-13aa-49ec-83ea-3b2c87f603bc 2562810 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6e697 0xc002f6e698}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.757: INFO: Pod "webserver-deployment-c7997dcc8-9w2ct" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9w2ct webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-9w2ct 1aede6df-fa8d-45ef-93dd-f20464f43708 2562839 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6e7c7 0xc002f6e7c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 20:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.757: INFO: Pod "webserver-deployment-c7997dcc8-fdqc9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fdqc9 webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-fdqc9 3a0365bf-4297-4e6a-adbf-48f167f394ce 2562811 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6e947 0xc002f6e948}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.757: INFO: Pod "webserver-deployment-c7997dcc8-fz545" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fz545 webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-fz545 1863355d-447e-4f65-8fdb-e3401319c1b4 2562814 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6ea77 0xc002f6ea78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.757: INFO: Pod "webserver-deployment-c7997dcc8-lpmnv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lpmnv webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-lpmnv 30a4b8bd-df2f-42f8-bbe4-f13efa4e3544 2562883 0 2020-08-22 20:12:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6ebb7 0xc002f6ebb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.210,StartTime:2020-08-22 20:12:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.757: INFO: Pod "webserver-deployment-c7997dcc8-ngzkk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ngzkk webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-ngzkk dcb5de2d-d0ee-4b37-b1e4-c17af1cef727 2562824 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6ed87 0xc002f6ed88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.757: INFO: Pod "webserver-deployment-c7997dcc8-shn9l" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-shn9l webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-shn9l d45a0ff5-3324-47b8-9523-4ccd83cb49ff 2562767 0 2020-08-22 20:12:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6eeb7 0xc002f6eeb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.202,StartTime:2020-08-22 20:12:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.202,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.758: INFO: Pod "webserver-deployment-c7997dcc8-sqfdm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sqfdm webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-sqfdm b9f5b732-d459-48ac-a2cf-0d778f95ccd9 2562747 0 2020-08-22 20:12:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6f0b7 0xc002f6f0b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 20:12:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.758: INFO: Pod "webserver-deployment-c7997dcc8-wmh9h" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wmh9h webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-wmh9h afd5c4b3-6ff4-4145-8a65-58cd52b6b7df 2562830 0 2020-08-22 20:12:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6f367 0xc002f6f368}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.209,StartTime:2020-08-22 20:12:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.758: INFO: Pod "webserver-deployment-c7997dcc8-x754m" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x754m webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-x754m 4bd8ed11-c86a-4df8-96ee-c07df07f2e5c 2562726 0 2020-08-22 20:12:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6f527 0xc002f6f528}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 20:12:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 20:12:46.758: INFO: Pod "webserver-deployment-c7997dcc8-xxf9h" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xxf9h webserver-deployment-c7997dcc8- deployment-5526 /api/v1/namespaces/deployment-5526/pods/webserver-deployment-c7997dcc8-xxf9h a1bab1a1-7ac7-4fda-b8ca-28234111164d 2562841 0 2020-08-22 20:12:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d0f937a4-5dda-4b84-acff-07a1d7aeecf1 0xc002f6f6b7 0xc002f6f6b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vjh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vjh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vjh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 20:12:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 20:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
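
The Pending pods dumped above are expected while this test runs: the Deployment was updated to the image webserver:404, which the registry rejects ("pull access denied, repository does not exist"), so pods of the new ReplicaSet sit in ContainerCreating or ErrImagePull and never become available. A quick way to summarize the waiting reasons across a namespace such as deployment-5526 is a jsonpath query (a sketch, not part of this run):

$ kubectl get pods -n deployment-5526 -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].state.waiting.reason}{"\n"}{end}'
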
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:12:46.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5526" for this suite.

• [SLOW TEST:31.050 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":267,"skipped":4357,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:12:47.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 22 20:12:49.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8259'
Aug 22 20:13:18.083: INFO: stderr: ""
Aug 22 20:13:18.083: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 22 20:13:19.324: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:19.324: INFO: Found 0 / 1
Aug 22 20:13:20.439: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:20.439: INFO: Found 0 / 1
Aug 22 20:13:21.156: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:21.156: INFO: Found 0 / 1
Aug 22 20:13:22.252: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:22.252: INFO: Found 0 / 1
Aug 22 20:13:23.145: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:23.145: INFO: Found 0 / 1
Aug 22 20:13:24.180: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:24.180: INFO: Found 0 / 1
Aug 22 20:13:25.342: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:25.342: INFO: Found 1 / 1
Aug 22 20:13:25.342: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 22 20:13:25.439: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:25.439: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 22 20:13:25.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-9djn6 --namespace=kubectl-8259 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 22 20:13:26.056: INFO: stderr: ""
Aug 22 20:13:26.056: INFO: stdout: "pod/agnhost-master-9djn6 patched\n"
STEP: checking annotations
Aug 22 20:13:26.091: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 20:13:26.091: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:13:26.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8259" for this suite.

• [SLOW TEST:39.001 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":268,"skipped":4362,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:13:26.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 20:13:27.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-25'
Aug 22 20:13:27.229: INFO: stderr: ""
Aug 22 20:13:27.229: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Aug 22 20:13:27.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-25'
Aug 22 20:13:32.129: INFO: stderr: ""
Aug 22 20:13:32.129: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:13:32.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-25" for this suite.

• [SLOW TEST:6.043 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":269,"skipped":4363,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:13:32.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 22 20:13:32.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 22 20:13:32.736: INFO: stderr: ""
Aug 22 20:13:32.736: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:13:32.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6665" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":270,"skipped":4373,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:13:32.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-529f246f-00ab-4aa8-a82a-ee3ada9346c8
STEP: Creating secret with name s-test-opt-upd-67460d9b-b1ea-4ade-bce5-4150afc55696
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-529f246f-00ab-4aa8-a82a-ee3ada9346c8
STEP: Updating secret s-test-opt-upd-67460d9b-b1ea-4ade-bce5-4150afc55696
STEP: Creating secret with name s-test-opt-create-70eeac00-bbde-4128-a2ff-6da206bee5a9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:14:58.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2282" for this suite.

• [SLOW TEST:85.780 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:14:58.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-04b377c0-9df5-4cc0-94bf-94c30e29930d in namespace container-probe-2803
Aug 22 20:15:04.834: INFO: Started pod liveness-04b377c0-9df5-4cc0-94bf-94c30e29930d in namespace container-probe-2803
STEP: checking the pod's current state and verifying that restartCount is present
Aug 22 20:15:04.842: INFO: Initial restart count of pod liveness-04b377c0-9df5-4cc0-94bf-94c30e29930d is 0
Aug 22 20:15:30.654: INFO: Restart count of pod container-probe-2803/liveness-04b377c0-9df5-4cc0-94bf-94c30e29930d is now 1 (25.812039882s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:15:30.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2803" for this suite.

• [SLOW TEST:33.148 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4419,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:15:31.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 20:15:32.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:15:38.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3872" for this suite.

• [SLOW TEST:6.679 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4436,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:15:38.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 20:15:38.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e" in namespace "downward-api-1418" to be "success or failure"
Aug 22 20:15:38.492: INFO: Pod "downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e": Phase="Pending", Reason="", readiness=false. Elapsed: 47.415084ms
Aug 22 20:15:40.497: INFO: Pod "downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052691353s
Aug 22 20:15:42.564: INFO: Pod "downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e": Phase="Running", Reason="", readiness=true. Elapsed: 4.119767897s
Aug 22 20:15:45.005: INFO: Pod "downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.560458873s
STEP: Saw pod success
Aug 22 20:15:45.005: INFO: Pod "downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e" satisfied condition "success or failure"
Aug 22 20:15:45.009: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e container client-container: 
STEP: delete the pod
Aug 22 20:15:45.657: INFO: Waiting for pod downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e to disappear
Aug 22 20:15:45.751: INFO: Pod downwardapi-volume-64469545-d811-4cba-a21c-36b59488aa5e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:15:45.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1418" for this suite.

• [SLOW TEST:7.578 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4523,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:15:45.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 20:15:46.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1" in namespace "downward-api-1109" to be "success or failure"
Aug 22 20:15:46.355: INFO: Pod "downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 133.912487ms
Aug 22 20:15:48.415: INFO: Pod "downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194133333s
Aug 22 20:15:50.418: INFO: Pod "downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1": Phase="Running", Reason="", readiness=true. Elapsed: 4.197440233s
Aug 22 20:15:52.423: INFO: Pod "downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202251389s
STEP: Saw pod success
Aug 22 20:15:52.423: INFO: Pod "downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1" satisfied condition "success or failure"
Aug 22 20:15:52.427: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1 container client-container: 
STEP: delete the pod
Aug 22 20:15:52.784: INFO: Waiting for pod downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1 to disappear
Aug 22 20:15:52.830: INFO: Pod downwardapi-volume-9893876b-8bbd-4a7e-9eea-5eed72664bd1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:15:52.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1109" for this suite.

• [SLOW TEST:6.883 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4529,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:15:52.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-4430e6b7-4234-4762-9f23-319d43d94330 in namespace container-probe-6019
Aug 22 20:15:59.960: INFO: Started pod busybox-4430e6b7-4234-4762-9f23-319d43d94330 in namespace container-probe-6019
STEP: checking the pod's current state and verifying that restartCount is present
Aug 22 20:15:59.963: INFO: Initial restart count of pod busybox-4430e6b7-4234-4762-9f23-319d43d94330 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:20:00.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6019" for this suite.

• [SLOW TEST:248.002 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4536,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:20:00.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:20:01.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1488" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":277,"skipped":4548,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 20:20:01.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Aug 22 20:20:05.242: INFO: Pod pod-hostip-38a75f1e-4d30-4028-af72-534e74843f54 has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 20:20:05.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3524" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4551,"failed":0}
SSSSSSSSSSSSSSS
Aug 22 20:20:05.248: INFO: Running AfterSuite actions on all nodes
Aug 22 20:20:05.248: INFO: Running AfterSuite actions on node 1
Aug 22 20:20:05.248: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 6315.696 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS