I0821 00:11:00.915239 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0821 00:11:00.921680 7 e2e.go:109] Starting e2e run "1e49d60e-6a90-4523-993a-99c952e0eed9" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597968647 - Will randomize all specs
Will run 278 of 4844 specs

Aug 21 00:11:01.460: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 00:11:01.514: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 21 00:11:01.702: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 21 00:11:01.870: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 21 00:11:01.870: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 21 00:11:01.871: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 21 00:11:01.921: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 21 00:11:01.921: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 21 00:11:01.922: INFO: e2e test version: v1.17.11
Aug 21 00:11:01.925: INFO: kube-apiserver version: v1.17.5
Aug 21 00:11:01.926: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 00:11:01.945: INFO: Cluster IP family: ipv4
S
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:11:01.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Aug 21 00:11:02.063: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 00:11:06.651: INFO: Successfully updated pod "pod-update-3c9139a4-fb23-4827-a858-bc5080b36e9d"
STEP: verifying the updated pod is in kubernetes
Aug 21 00:11:06.667: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:11:06.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3544" for this suite.
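The "Waiting up to 5m0s for ..." entries throughout this log all follow the same check-sleep-retry pattern the e2e framework uses to wait for a condition with a deadline. A minimal sketch of that pattern (names and intervals are illustrative, not the framework's actual code):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Mirrors the log's "Waiting up to ..." / "Elapsed: ..." pattern:
    check, sleep a fixed interval, and report elapsed time on failure.
    """
    start = time.monotonic()
    while True:
        if condition():
            return True
        elapsed = time.monotonic() - start
        if elapsed + interval > timeout:
            raise TimeoutError(f"condition not met after {elapsed:.3f}s")
        time.sleep(interval)

# Illustrative use: a "pod" that reaches phase Succeeded on the third poll,
# like the Pending -> Pending -> Succeeded sequence logged for test pods.
phases = iter(["Pending", "Pending", "Succeeded"])
state = {"phase": "Pending"}

def pod_succeeded():
    state["phase"] = next(phases, state["phase"])
    return state["phase"] == "Succeeded"

wait_for(pod_succeeded, timeout=10.0, interval=0.01)
```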
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:11:06.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-72840e52-4a36-4c14-bc86-5c259a88b6a9
STEP: Creating a pod to test consume configMaps
Aug 21 00:11:06.813: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952" in namespace "projected-4033" to be "success or failure"
Aug 21 00:11:06.833: INFO: Pod "pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952": Phase="Pending", Reason="", readiness=false. Elapsed: 19.815023ms
Aug 21 00:11:08.842: INFO: Pod "pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028413828s
Aug 21 00:11:11.054: INFO: Pod "pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.240589234s
STEP: Saw pod success
Aug 21 00:11:11.054: INFO: Pod "pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952" satisfied condition "success or failure"
Aug 21 00:11:11.060: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952 container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 00:11:11.094: INFO: Waiting for pod pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952 to disappear
Aug 21 00:11:11.115: INFO: Pod pod-projected-configmaps-a64d9755-de4c-4f47-b948-a9c33f75d952 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:11:11.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4033" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:11:11.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 00:11:13.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565472, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565472, loc:(*time.Location)(0x726af60)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565473, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565473, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Aug 21 00:11:15.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565473, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565473, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565473, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565472, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 00:11:18.124: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:11:18.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3767" for this suite.
STEP: Destroying namespace "webhook-3767-markers" for this suite.
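The two `deployment status:` snapshots above show what the framework's readiness wait is checking: the deployment stays "not ready" while `UpdatedReplicas` or `AvailableReplicas` lag the desired count. A minimal sketch of that decision (field and function names are illustrative, not the framework's actual code, and this ignores `maxUnavailable` semantics):

```python
from dataclasses import dataclass

@dataclass
class DeploymentStatus:
    """Subset of the v1.DeploymentStatus fields printed in the log."""
    observed_generation: int
    replicas: int
    updated_replicas: int
    ready_replicas: int
    available_replicas: int

def is_ready(desired: int, s: DeploymentStatus, generation: int = 1) -> bool:
    # Ready only once the controller has observed the current spec and
    # every desired replica is both updated and available.
    return (s.observed_generation >= generation
            and s.updated_replicas == desired
            and s.available_replicas == desired)

# The two snapshots logged above, for a 1-replica webhook deployment:
first = DeploymentStatus(1, 0, 0, 0, 0)   # NewReplicaSetCreated, unavailable
second = DeploymentStatus(1, 1, 1, 0, 0)  # ReplicaSetUpdated, still unavailable
```

Both snapshots fail the check, which is why the wait loop logs the status twice before the endpoint pairing step succeeds.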
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.550 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":3,"skipped":88,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:11:18.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-2767
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2767
STEP: creating replication controller externalsvc in namespace services-2767
I0821 00:11:18.922687 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2767, replica count: 2
I0821 00:11:21.978509 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0821 00:11:24.980083 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Aug 21 00:11:25.126: INFO: Creating new exec pod
Aug 21 00:11:29.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2767 execpodv4wv8 -- /bin/sh -x -c nslookup nodeport-service'
Aug 21 00:11:33.288: INFO: stderr: "I0821 00:11:33.146202 33 log.go:172] (0x4000724000) (0x400079e6e0) Create stream\nI0821 00:11:33.149404 33 log.go:172] (0x4000724000) (0x400079e6e0) Stream added, broadcasting: 1\nI0821 00:11:33.162777 33 log.go:172] (0x4000724000) Reply frame received for 1\nI0821 00:11:33.163919 33 log.go:172] (0x4000724000) (0x40009880a0) Create stream\nI0821 00:11:33.164023 33 log.go:172] (0x4000724000) (0x40009880a0) Stream added, broadcasting: 3\nI0821 00:11:33.166315 33 log.go:172] (0x4000724000) Reply frame received for 3\nI0821 00:11:33.167126 33 log.go:172] (0x4000724000) (0x40009881e0) Create stream\nI0821 00:11:33.167265 33 log.go:172] (0x4000724000) (0x40009881e0) Stream added, broadcasting: 5\nI0821 00:11:33.169515 33 log.go:172] (0x4000724000) Reply frame received for 5\nI0821 00:11:33.250767 33 log.go:172] (0x4000724000) Data frame received for 5\nI0821 00:11:33.251145 33 log.go:172] (0x40009881e0) (5) Data frame handling\nI0821 00:11:33.251987 33 log.go:172] (0x40009881e0) (5) Data frame sent\n+ nslookup nodeport-service\nI0821 00:11:33.259107 33 log.go:172] (0x4000724000) Data frame received for 3\nI0821 00:11:33.259265 33 log.go:172] (0x40009880a0) (3) Data frame handling\nI0821 00:11:33.259429 33 log.go:172] (0x40009880a0) (3) Data frame sent\nI0821 00:11:33.260145 33 log.go:172] (0x4000724000) Data frame received for 3\nI0821 00:11:33.260259 33 log.go:172] (0x40009880a0) (3) Data frame handling\nI0821 00:11:33.260371 33 log.go:172] (0x40009880a0) (3) Data frame sent\nI0821 00:11:33.260515 33 log.go:172] (0x4000724000) Data frame received for 3\nI0821 00:11:33.260673 33 log.go:172] (0x4000724000) Data frame received for 5\nI0821 00:11:33.260948 33 log.go:172] (0x40009881e0) (5) Data frame handling\nI0821 00:11:33.261185 33 log.go:172] (0x40009880a0) (3) Data frame handling\nI0821 00:11:33.262383 33 log.go:172] (0x4000724000) Data frame received for 1\nI0821 00:11:33.262584 33 log.go:172] (0x400079e6e0) (1) Data frame handling\nI0821 00:11:33.262775 33 log.go:172] (0x400079e6e0) (1) Data frame sent\nI0821 00:11:33.264173 33 log.go:172] (0x4000724000) (0x400079e6e0) Stream removed, broadcasting: 1\nI0821 00:11:33.266789 33 log.go:172] (0x4000724000) Go away received\nI0821 00:11:33.271016 33 log.go:172] (0x4000724000) (0x400079e6e0) Stream removed, broadcasting: 1\nI0821 00:11:33.271421 33 log.go:172] (0x4000724000) (0x40009880a0) Stream removed, broadcasting: 3\nI0821 00:11:33.271687 33 log.go:172] (0x4000724000) (0x40009881e0) Stream removed, broadcasting: 5\n"
Aug 21 00:11:33.290: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2767.svc.cluster.local\tcanonical name = externalsvc.services-2767.svc.cluster.local.\nName:\texternalsvc.services-2767.svc.cluster.local\nAddress: 10.96.61.67\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2767, will wait for the garbage collector to delete the pods
Aug 21 00:11:33.359: INFO: Deleting ReplicationController externalsvc took: 11.692536ms
Aug 21 00:11:33.460: INFO: Terminating ReplicationController externalsvc pods took: 101.342408ms
Aug 21 00:11:41.686: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:11:41.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2767" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:23.064 seconds]
[sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":4,"skipped":96,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:11:41.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 21 00:11:41.919: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5003 /api/v1/namespaces/watch-5003/configmaps/e2e-watch-test-resource-version d8739d19-78eb-4867-acba-d460b34223a7 1969788 0 2020-08-21 00:11:41 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 00:11:41.929: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5003 /api/v1/namespaces/watch-5003/configmaps/e2e-watch-test-resource-version d8739d19-78eb-4867-acba-d460b34223a7 1969789 0 2020-08-21 00:11:41 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:11:41.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5003" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":5,"skipped":102,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:11:41.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 21 00:11:42.089: INFO: Waiting up to 5m0s for pod "pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7" in namespace "emptydir-3311" to be "success or failure"
Aug 21 00:11:42.143: INFO: Pod "pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 54.110539ms
Aug 21 00:11:44.149: INFO: Pod "pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060287053s
Aug 21 00:11:46.161: INFO: Pod "pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072058939s
STEP: Saw pod success
Aug 21 00:11:46.161: INFO: Pod "pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7" satisfied condition "success or failure"
Aug 21 00:11:46.166: INFO: Trying to get logs from node jerma-worker2 pod pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7 container test-container:
STEP: delete the pod
Aug 21 00:11:46.196: INFO: Waiting for pod pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7 to disappear
Aug 21 00:11:46.201: INFO: Pod pod-9e9bb6ef-62a4-4b7c-8986-afa681777ee7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:11:46.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3311" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:11:46.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2666
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-2666
I0821 00:11:46.560671 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2666, replica count: 2
I0821 00:11:49.612509 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0821 00:11:52.613367 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 21 00:11:52.613: INFO: Creating new exec pod
Aug 21 00:11:57.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2666 execpodn79sn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 00:11:59.092: INFO: stderr: "I0821 00:11:58.984569 66 log.go:172] (0x4000112420) (0x40007efd60) Create stream\nI0821 00:11:58.986927 66 log.go:172] (0x4000112420) (0x40007efd60) Stream added, broadcasting: 1\nI0821 00:11:58.995689 66 log.go:172] (0x4000112420) Reply frame received for 1\nI0821 00:11:58.996234 66 log.go:172] (0x4000112420) (0x40007efe00) Create stream\nI0821 00:11:58.996290 66 log.go:172] (0x4000112420) (0x40007efe00) Stream added, broadcasting: 3\nI0821 00:11:58.997751 66 log.go:172] (0x4000112420) Reply frame received for 3\nI0821 00:11:58.997952 66 log.go:172] (0x4000112420) (0x40007efea0) Create stream\nI0821 00:11:58.998011 66 log.go:172] (0x4000112420) (0x40007efea0) Stream added, broadcasting: 5\nI0821 00:11:58.999936 66 log.go:172] (0x4000112420) Reply frame received for 5\nI0821 00:11:59.069931 66 log.go:172] (0x4000112420) Data frame received for 5\nI0821 00:11:59.070093 66 log.go:172] (0x4000112420) Data frame received for 3\nI0821 00:11:59.070684 66 log.go:172] (0x40007efea0) (5) Data frame handling\nI0821 00:11:59.070870 66 log.go:172] (0x40007efe00) (3) Data frame handling\nI0821 00:11:59.071814 66 log.go:172] (0x4000112420) Data frame received for 1\nI0821 00:11:59.071953 66 log.go:172] (0x40007efd60) (1) Data frame handling\nI0821 00:11:59.072181 66 log.go:172] (0x40007efea0) (5) Data frame sent\nI0821 00:11:59.072591 66 log.go:172] (0x4000112420) Data frame received for 5\nI0821 00:11:59.072660 66 log.go:172] (0x40007efea0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0821 00:11:59.073351 66 log.go:172] (0x40007efd60) (1) Data frame sent\nI0821 00:11:59.074551 66 log.go:172] (0x4000112420) (0x40007efd60) Stream removed, broadcasting: 1\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 00:11:59.077085 66 log.go:172] (0x40007efea0) (5) Data frame sent\nI0821 00:11:59.077262 66 log.go:172] (0x4000112420) Data frame received for 5\nI0821 00:11:59.077419 66 log.go:172] (0x40007efea0) (5) Data frame handling\nI0821 00:11:59.077801 66 log.go:172] (0x4000112420) Go away received\nI0821 00:11:59.080871 66 log.go:172] (0x4000112420) (0x40007efd60) Stream removed, broadcasting: 1\nI0821 00:11:59.081142 66 log.go:172] (0x4000112420) (0x40007efe00) Stream removed, broadcasting: 3\nI0821 00:11:59.081355 66 log.go:172] (0x4000112420) (0x40007efea0) Stream removed, broadcasting: 5\n"
Aug 21 00:11:59.094: INFO: stdout: ""
Aug 21 00:11:59.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2666 execpodn79sn -- /bin/sh -x -c nc -zv -t -w 2 10.102.198.195 80'
Aug 21 00:12:00.714: INFO: stderr: "I0821 00:12:00.595513 88 log.go:172] (0x4000110370) (0x400092c000) Create stream\nI0821 00:12:00.600515 88 log.go:172] (0x4000110370) (0x400092c000) Stream added, broadcasting: 1\nI0821 00:12:00.615906 88 log.go:172] (0x4000110370) Reply frame received for 1\nI0821 00:12:00.616938 88 log.go:172] (0x4000110370) (0x4000a30000) Create stream\nI0821 00:12:00.617055 88 log.go:172] (0x4000110370) (0x4000a30000) Stream added, broadcasting: 3\nI0821 00:12:00.619275 88 log.go:172] (0x4000110370) Reply frame received for 3\nI0821 00:12:00.619873 88 log.go:172] (0x4000110370) (0x40007e5a40) Create stream\nI0821 00:12:00.620019 88 log.go:172] (0x4000110370) (0x40007e5a40) Stream added, broadcasting: 5\nI0821 00:12:00.621906 88 log.go:172] (0x4000110370) Reply frame received for 5\nI0821 00:12:00.690646 88 log.go:172] (0x4000110370) Data frame received for 5\nI0821 00:12:00.690949 88 log.go:172] (0x4000110370) Data frame received for 3\nI0821 00:12:00.691182 88 log.go:172] (0x4000110370) Data frame received for 1\nI0821 00:12:00.691336 88 log.go:172] (0x400092c000) (1) Data frame handling\nI0821 00:12:00.691483 88 log.go:172] (0x4000a30000) (3) Data frame handling\nI0821 00:12:00.691721 88 log.go:172] (0x40007e5a40) (5) Data frame handling\nI0821 00:12:00.693210 88 log.go:172] (0x400092c000) (1) Data frame sent\nI0821 00:12:00.693804 88 log.go:172] (0x40007e5a40) (5) Data frame sent\nI0821 00:12:00.693910 88 log.go:172] (0x4000110370) Data frame received for 5\nI0821 00:12:00.693982 88 log.go:172] (0x40007e5a40) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.198.195 80\nConnection to 10.102.198.195 80 port [tcp/http] succeeded!\nI0821 00:12:00.696538 88 log.go:172] (0x4000110370) (0x400092c000) Stream removed, broadcasting: 1\nI0821 00:12:00.698066 88 log.go:172] (0x4000110370) Go away received\nI0821 00:12:00.702434 88 log.go:172] (0x4000110370) (0x400092c000) Stream removed, broadcasting: 1\nI0821 00:12:00.702959 88 log.go:172] (0x4000110370) (0x4000a30000) Stream removed, broadcasting: 3\nI0821 00:12:00.703297 88 log.go:172] (0x4000110370) (0x40007e5a40) Stream removed, broadcasting: 5\n"
Aug 21 00:12:00.715: INFO: stdout: ""
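The `nc -zv -t -w 2 <host> 80` probes above succeed as soon as a TCP connection can be opened within the 2-second timeout. A rough Python equivalent of that reachability check (a sketch for illustration, not part of the test suite):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True iff a TCP connection to (host, port) can be
    established within `timeout` seconds, like `nc -zv -t -w 2`."""
    try:
        # create_connection performs the full TCP handshake, then we
        # close immediately; no payload is sent (nc's -z "zero-I/O" mode).
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False
```

Usage mirrors the test's two probes: first against the service DNS name, then against the ClusterIP, both on port 80.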
Aug 21 00:12:00.716: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:12:00.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2666" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:14.754 seconds]
[sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":7,"skipped":148,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:12:00.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 21 00:12:01.314: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:12:01.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9644" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":8,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:12:01.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:12:01.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8682" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":9,"skipped":204,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:12:01.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-2ac72052-f4db-4a94-a397-024fa4345236
STEP: Creating secret with name
secret-projected-all-test-volume-baeaab05-6e44-416c-89aa-d8e9055713ef STEP: Creating a pod to test Check all projections for projected volume plugin Aug 21 00:12:01.934: INFO: Waiting up to 5m0s for pod "projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c" in namespace "projected-5979" to be "success or failure" Aug 21 00:12:02.022: INFO: Pod "projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c": Phase="Pending", Reason="", readiness=false. Elapsed: 88.317216ms Aug 21 00:12:04.150: INFO: Pod "projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21632258s Aug 21 00:12:06.215: INFO: Pod "projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280814472s Aug 21 00:12:08.372: INFO: Pod "projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c": Phase="Running", Reason="", readiness=true. Elapsed: 6.437995271s Aug 21 00:12:10.558: INFO: Pod "projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.62365194s STEP: Saw pod success Aug 21 00:12:10.558: INFO: Pod "projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c" satisfied condition "success or failure" Aug 21 00:12:10.628: INFO: Trying to get logs from node jerma-worker pod projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c container projected-all-volume-test: STEP: delete the pod Aug 21 00:12:11.001: INFO: Waiting for pod projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c to disappear Aug 21 00:12:11.005: INFO: Pod projected-volume-eb34a956-d859-4e30-b048-4d8f2266814c no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:12:11.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5979" for this suite. 
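The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above show the framework's polling pattern: repeatedly read the pod's phase until it reaches a terminal state (Succeeded or Failed) or the deadline passes. A minimal sketch of that loop, with a hypothetical `get_phase` callback standing in for an API read (illustrative Python, not the framework's actual Go helper):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase
    ('Succeeded' or 'Failed'), like the framework's 5m wait.
    Returns the terminal phase, or raises TimeoutError."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase within %.0fs" % timeout)
```

Injecting `clock` and `sleep` keeps the loop testable without real delays; in the log above the same loop produces the Pending, Pending, Running, Succeeded progression with the elapsed times printed at each poll.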
• [SLOW TEST:9.288 seconds] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":10,"skipped":253,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:12:11.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:12:11.510: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:12:15.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6395" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":11,"skipped":261,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:12:15.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 21 00:12:15.666: INFO: Waiting up to 5m0s for pod "pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4" in namespace "emptydir-8835" to be "success or failure" Aug 21 00:12:15.688: INFO: Pod "pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.386023ms Aug 21 00:12:17.694: INFO: Pod "pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027818544s Aug 21 00:12:19.759: INFO: Pod "pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4": Phase="Running", Reason="", readiness=true. Elapsed: 4.092868961s Aug 21 00:12:21.766: INFO: Pod "pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099610283s STEP: Saw pod success Aug 21 00:12:21.766: INFO: Pod "pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4" satisfied condition "success or failure" Aug 21 00:12:21.771: INFO: Trying to get logs from node jerma-worker pod pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4 container test-container: STEP: delete the pod Aug 21 00:12:21.797: INFO: Waiting for pod pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4 to disappear Aug 21 00:12:21.819: INFO: Pod pod-8e10acd2-b98e-48a1-b74f-dc4c4d0522d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:12:21.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8835" for this suite. 
• [SLOW TEST:6.279 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":264,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:12:21.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 21 00:12:22.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d" in namespace "downward-api-6878" to be 
"success or failure" Aug 21 00:12:22.144: INFO: Pod "downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.376438ms Aug 21 00:12:24.197: INFO: Pod "downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088450884s Aug 21 00:12:26.354: INFO: Pod "downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245013773s Aug 21 00:12:28.361: INFO: Pod "downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.252791904s STEP: Saw pod success Aug 21 00:12:28.362: INFO: Pod "downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d" satisfied condition "success or failure" Aug 21 00:12:28.368: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d container client-container: STEP: delete the pod Aug 21 00:12:28.388: INFO: Waiting for pod downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d to disappear Aug 21 00:12:28.392: INFO: Pod downwardapi-volume-571b672f-9794-4276-a1de-ffb42cd6988d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:12:28.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6878" for this suite. 
• [SLOW TEST:6.572 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:12:28.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 21 00:12:28.460: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 21 00:12:28.513: INFO: Waiting for terminating namespaces to be deleted... 
Aug 21 00:12:28.521: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 21 00:12:28.535: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.536: INFO: Container app ready: true, restart count 0 Aug 21 00:12:28.536: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.536: INFO: Container kube-proxy ready: true, restart count 0 Aug 21 00:12:28.536: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.536: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 00:12:28.536: INFO: rally-99986c3c-eie0zm10-2257b from c-rally-99986c3c-chqrgha5 started at 2020-08-21 00:11:55 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.536: INFO: Container rally-99986c3c-eie0zm10 ready: false, restart count 0 Aug 21 00:12:28.536: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Aug 21 00:12:28.548: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.548: INFO: Container kube-proxy ready: true, restart count 0 Aug 21 00:12:28.548: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.548: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 00:12:28.549: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.549: INFO: Container app ready: true, restart count 0 Aug 21 00:12:28.549: INFO: rally-99986c3c-eie0zm10-j2mhn from c-rally-99986c3c-chqrgha5 started at 2020-08-21 00:11:55 +0000 UTC (1 container statuses recorded) Aug 21 00:12:28.549: INFO: Container rally-99986c3c-eie0zm10 ready: false, restart count 0 [It] validates that NodeSelector is respected if 
matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-20b96ebd-dfc7-4985-a550-114fe05287c4 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-20b96ebd-dfc7-4985-a550-114fe05287c4 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-20b96ebd-dfc7-4985-a550-114fe05287c4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:12:36.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4794" for this suite. 
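The NodeSelector predicate validated above is subset matching: a pod may schedule onto a node only if every key/value pair in the pod's nodeSelector is present, with the same value, in the node's labels. That is why the test first applies a random label (value `42`) to a node, relaunches the pod with a matching selector, and then removes the label. A one-line sketch of the matching rule (illustrative Python; the scheduler's real implementation is Go):

```python
def node_selector_matches(node_labels, node_selector):
    """True iff every nodeSelector key/value pair appears in the node's
    labels (subset semantics); an empty selector matches any node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```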
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.387 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":14,"skipped":287,"failed":0} [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:12:36.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5329.svc.cluster.local)" && echo OK > 
/results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5329.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5329.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5329.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 00:12:45.103: INFO: DNS probes using dns-5329/dns-test-b33c657c-5580-4603-b2cd-40ea20a7ac81 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:12:45.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5329" for this suite. 
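The awk one-liner in the probe commands above derives a pod's DNS A-record name from its IP: the dots in the address become dashes, then the namespace and the `pod.<cluster domain>` suffix are appended. The same transformation as a small illustrative helper (function name assumed; Python rather than the framework's Go):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build the DNS A-record name for a pod IP, as the probe's awk
    one-liner does: e.g. 10.244.1.7 in namespace dns-5329 becomes
    10-244-1-7.dns-5329.pod.cluster.local."""
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, cluster_domain)
```

The probes then resolve this name over both UDP and TCP (`dig +notcp` / `dig +tcp`) and write an OK marker file only when an answer comes back, which is what the "DNS probes ... succeeded" line above is checking for.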
• [SLOW TEST:8.904 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":15,"skipped":287,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:12:45.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:12:45.826: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 
00:12:46.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2596" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":16,"skipped":304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:12:46.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 21 00:12:53.449: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:12:53.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6369" for this suite. • [SLOW TEST:6.752 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client Aug 21 00:12:53.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 21 00:12:53.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5205 /api/v1/namespaces/watch-5205/configmaps/e2e-watch-test-label-changed fbeab0fd-06e9-46d8-beaa-aed2770349c6 1970547 0 2020-08-21 00:12:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 21 00:12:53.639: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5205 /api/v1/namespaces/watch-5205/configmaps/e2e-watch-test-label-changed fbeab0fd-06e9-46d8-beaa-aed2770349c6 1970548 0 2020-08-21 00:12:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 21 00:12:53.639: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5205 /api/v1/namespaces/watch-5205/configmaps/e2e-watch-test-label-changed fbeab0fd-06e9-46d8-beaa-aed2770349c6 1970549 0 2020-08-21 00:12:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's 
requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 21 00:13:04.876: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5205 /api/v1/namespaces/watch-5205/configmaps/e2e-watch-test-label-changed fbeab0fd-06e9-46d8-beaa-aed2770349c6 1970629 0 2020-08-21 00:12:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 21 00:13:04.877: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5205 /api/v1/namespaces/watch-5205/configmaps/e2e-watch-test-label-changed fbeab0fd-06e9-46d8-beaa-aed2770349c6 1970630 0 2020-08-21 00:12:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Aug 21 00:13:04.878: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5205 /api/v1/namespaces/watch-5205/configmaps/e2e-watch-test-label-changed fbeab0fd-06e9-46d8-beaa-aed2770349c6 1970632 0 2020-08-21 00:12:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:13:04.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5205" for this suite. 
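The event sequence above follows label-selector watch semantics: a modification that keeps the object inside the selector is reported as MODIFIED, changing the label away from the selector is reported to that watch as DELETED, and restoring the label is reported as ADDED. A sketch of that decision rule (illustrative Python; the API server's real logic is Go):

```python
def watch_event(selector, old_labels, new_labels):
    """Event type a label-selector watch reports for a label change:
    leaving the selector -> DELETED, entering it -> ADDED, staying
    inside -> MODIFIED, staying outside -> None (invisible)."""
    def matches(labels):
        return labels is not None and all(
            labels.get(k) == v for k, v in selector.items())
    was, now = matches(old_labels), matches(new_labels)
    if was and not now:
        return "DELETED"
    if now and not was:
        return "ADDED"
    if was and now:
        return "MODIFIED"
    return None
```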
• [SLOW TEST:12.028 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":18,"skipped":384,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:13:05.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 21 00:13:14.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 00:13:14.870: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 00:13:16.870: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 00:13:16.893: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 00:13:18.870: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 00:13:18.877: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 00:13:20.870: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 00:13:20.935: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 00:13:22.870: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 00:13:22.907: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:13:22.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3280" for this suite. 
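The preStop test above confirms the kubelet runs the pod's preStop HTTP hook to completion before terminating the container, which is why the pod lingers for several poll intervals after deletion. A simplified sketch of that shutdown ordering, with hypothetical step names (the real sequence is implemented inside the kubelet, not as a function like this):

```python
def terminate(has_prestop, hook_ok=True, exited_in_grace=True):
    """Ordered steps the kubelet takes when a pod with a preStop hook is deleted."""
    steps = []
    if has_prestop:
        steps.append("run preStop hook")            # HTTP GET or exec action
        if not hook_ok:
            # a failing hook is recorded but does not block deletion
            steps.append("record FailedPreStopHook event")
    steps.append("send SIGTERM")
    if not exited_in_grace:
        # escalate once terminationGracePeriodSeconds elapses
        steps.append("send SIGKILL")
    return steps
```

The hook and the grace period together bound how long "Pod ... still exists" messages repeat in the log before the pod finally disappears.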
• [SLOW TEST:17.463 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":392,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:13:22.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:13:23.101: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b9bba083-840a-424a-a6d4-63edc66cfa57" in namespace "security-context-test-7717" to be "success or failure" Aug 21 00:13:23.181: INFO: Pod "busybox-user-65534-b9bba083-840a-424a-a6d4-63edc66cfa57": Phase="Pending", Reason="", readiness=false. Elapsed: 79.610626ms Aug 21 00:13:25.188: INFO: Pod "busybox-user-65534-b9bba083-840a-424a-a6d4-63edc66cfa57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087046928s Aug 21 00:13:27.194: INFO: Pod "busybox-user-65534-b9bba083-840a-424a-a6d4-63edc66cfa57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09304696s Aug 21 00:13:29.200: INFO: Pod "busybox-user-65534-b9bba083-840a-424a-a6d4-63edc66cfa57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098614124s Aug 21 00:13:29.200: INFO: Pod "busybox-user-65534-b9bba083-840a-424a-a6d4-63edc66cfa57" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:13:29.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7717" for this suite. 
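The runAsUser test above starts a busybox container with `securityContext.runAsUser: 65534` and checks the process's effective UID. The precedence rule it relies on can be sketched as below; the parameter names are illustrative, not the API field names:

```python
def effective_uid(image_default, pod_run_as_user=None, container_run_as_user=None):
    """Container-level securityContext wins over pod-level securityContext,
    which wins over the UID baked into the container image."""
    if container_run_as_user is not None:
        return container_run_as_user
    if pod_run_as_user is not None:
        return pod_run_as_user
    return image_default
```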
• [SLOW TEST:6.278 seconds] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":410,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:13:29.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 21 00:13:36.405: INFO: Successfully updated pod "adopt-release-926jq" STEP: Checking that the Job readopts 
the Pod Aug 21 00:13:36.406: INFO: Waiting up to 15m0s for pod "adopt-release-926jq" in namespace "job-3286" to be "adopted" Aug 21 00:13:36.410: INFO: Pod "adopt-release-926jq": Phase="Running", Reason="", readiness=true. Elapsed: 4.288009ms Aug 21 00:13:38.418: INFO: Pod "adopt-release-926jq": Phase="Running", Reason="", readiness=true. Elapsed: 2.011852681s Aug 21 00:13:38.419: INFO: Pod "adopt-release-926jq" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 21 00:13:38.933: INFO: Successfully updated pod "adopt-release-926jq" STEP: Checking that the Job releases the Pod Aug 21 00:13:38.934: INFO: Waiting up to 15m0s for pod "adopt-release-926jq" in namespace "job-3286" to be "released" Aug 21 00:13:38.942: INFO: Pod "adopt-release-926jq": Phase="Running", Reason="", readiness=true. Elapsed: 8.045509ms Aug 21 00:13:38.942: INFO: Pod "adopt-release-926jq" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:13:38.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3286" for this suite. 
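The Job test above exercises controller adoption: an orphaned pod whose labels match the Job's selector regains an ownerReference ("adopted"), and a controlled pod whose labels are edited away loses it ("released"). A minimal sketch of that per-sync decision, with simplified inputs (the real controller compares a controllerRef ownerReference and a LabelSelector object):

```python
def reconcile_ownership(selector, pod_labels, owner):
    """Decide what a Job controller does with one pod on a sync pass.
    owner is the controlling Job's name, or None for an orphan."""
    selected = all(pod_labels.get(k) == v for k, v in selector.items())
    if selected and owner is None:
        return "adopt"    # orphan matching the selector gains an ownerReference
    if not selected and owner is not None:
        return "release"  # labels removed: the ownerReference is dropped
    return "keep"
```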
• [SLOW TEST:9.724 seconds] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":21,"skipped":412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:13:38.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 21 00:13:39.082: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 21 00:13:39.129: INFO: Waiting for terminating namespaces to be deleted... 
Aug 21 00:13:39.134: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 21 00:13:39.149: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.149: INFO: Container kube-proxy ready: true, restart count 0 Aug 21 00:13:39.149: INFO: adopt-release-926jq from job-3286 started at 2020-08-21 00:13:29 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.149: INFO: Container c ready: true, restart count 0 Aug 21 00:13:39.149: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.149: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 00:13:39.149: INFO: pod-handle-http-request from container-lifecycle-hook-3280 started at 2020-08-21 00:13:06 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.149: INFO: Container pod-handle-http-request ready: false, restart count 0 Aug 21 00:13:39.149: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.149: INFO: Container app ready: true, restart count 0 Aug 21 00:13:39.149: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Aug 21 00:13:39.165: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.165: INFO: Container kube-proxy ready: true, restart count 0 Aug 21 00:13:39.165: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.166: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 00:13:39.166: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.166: INFO: Container app ready: true, restart count 0 Aug 21 00:13:39.166: INFO: adopt-release-knvrr from job-3286 started at 2020-08-21 00:13:29 +0000 
UTC (1 container statuses recorded) Aug 21 00:13:39.166: INFO: Container c ready: true, restart count 0 Aug 21 00:13:39.166: INFO: adopt-release-w75lb from job-3286 started at 2020-08-21 00:13:39 +0000 UTC (1 container statuses recorded) Aug 21 00:13:39.166: INFO: Container c ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Aug 21 00:13:39.304: INFO: Pod pod-handle-http-request requesting resource cpu=0m on Node jerma-worker Aug 21 00:13:39.304: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker Aug 21 00:13:39.304: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2 Aug 21 00:13:39.304: INFO: Pod adopt-release-926jq requesting resource cpu=0m on Node jerma-worker Aug 21 00:13:39.304: INFO: Pod adopt-release-knvrr requesting resource cpu=0m on Node jerma-worker2 Aug 21 00:13:39.304: INFO: Pod adopt-release-w75lb requesting resource cpu=0m on Node jerma-worker2 Aug 21 00:13:39.304: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2 Aug 21 00:13:39.304: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker Aug 21 00:13:39.304: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2 Aug 21 00:13:39.304: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker STEP: Starting Pods to consume most of the cluster CPU. Aug 21 00:13:39.305: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Aug 21 00:13:39.315: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
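The scheduling step above tallies existing CPU requests per node (100m for each kindnet pod, 0m for the rest), fills the remaining allocatable CPU with filler pods, and then expects one more pod to fail with "Insufficient cpu". The fit check reduces to a sum over request millicores; the 16000m allocatable figure below is purely illustrative, not taken from the log:

```python
def fits(node_allocatable_mcpu, running_requests_mcpu, pod_request_mcpu):
    """True if the pod's CPU request fits in the node's remaining allocatable CPU."""
    return sum(running_requests_mcpu) + pod_request_mcpu <= node_allocatable_mcpu

# Illustrative: kindnet requests 100m, one filler pod requests 11130m
assumed_allocatable = 16000  # millicores; hypothetical value for the sketch
```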
STEP: Considering event: Type = [Normal], Name = [filler-pod-227e42d0-c924-4044-a436-e12b95d380ba.162d202cc22a6255], Reason = [Scheduled], Message = [Successfully assigned sched-pred-24/filler-pod-227e42d0-c924-4044-a436-e12b95d380ba to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-227e42d0-c924-4044-a436-e12b95d380ba.162d202d5f45d3f8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-227e42d0-c924-4044-a436-e12b95d380ba.162d202daa55a865], Reason = [Created], Message = [Created container filler-pod-227e42d0-c924-4044-a436-e12b95d380ba] STEP: Considering event: Type = [Normal], Name = [filler-pod-227e42d0-c924-4044-a436-e12b95d380ba.162d202db8a688dd], Reason = [Started], Message = [Started container filler-pod-227e42d0-c924-4044-a436-e12b95d380ba] STEP: Considering event: Type = [Normal], Name = [filler-pod-456373b0-9977-4ce5-aae4-05d8b6373a6b.162d202cbdfd7ade], Reason = [Scheduled], Message = [Successfully assigned sched-pred-24/filler-pod-456373b0-9977-4ce5-aae4-05d8b6373a6b to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-456373b0-9977-4ce5-aae4-05d8b6373a6b.162d202d0a936433], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-456373b0-9977-4ce5-aae4-05d8b6373a6b.162d202d74a663be], Reason = [Created], Message = [Created container filler-pod-456373b0-9977-4ce5-aae4-05d8b6373a6b] STEP: Considering event: Type = [Normal], Name = [filler-pod-456373b0-9977-4ce5-aae4-05d8b6373a6b.162d202d8dc135f6], Reason = [Started], Message = [Started container filler-pod-456373b0-9977-4ce5-aae4-05d8b6373a6b] STEP: Considering event: Type = [Warning], Name = [additional-pod.162d202e2da756c8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 
Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:13:46.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-24" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.641 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":22,"skipped":435,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 
00:13:46.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-1f6bbe14-e73a-47bd-98cc-26e3dd157d95 in namespace container-probe-6394 Aug 21 00:13:51.028: INFO: Started pod liveness-1f6bbe14-e73a-47bd-98cc-26e3dd157d95 in namespace container-probe-6394 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 00:13:51.034: INFO: Initial restart count of pod liveness-1f6bbe14-e73a-47bd-98cc-26e3dd157d95 is 0 Aug 21 00:14:11.111: INFO: Restart count of pod container-probe-6394/liveness-1f6bbe14-e73a-47bd-98cc-26e3dd157d95 is now 1 (20.076634906s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:14:11.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6394" for this suite. 
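The liveness test above waits for the restart count to move from 0 to 1 once the /healthz probe starts failing. The kubelet's rule is that a container restarts only after `failureThreshold` consecutive probe failures; a single failure followed by a success resets the count. A self-contained sketch of that bookkeeping:

```python
def restarts(probe_results, failure_threshold=3):
    """Count restarts given a sequence of probe outcomes (True = healthy).
    A restart fires after failure_threshold consecutive failures."""
    count = consecutive = 0
    for ok in probe_results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= failure_threshold:
            count += 1
            consecutive = 0   # the fresh container starts with a clean slate
    return count
```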
• [SLOW TEST:24.525 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":438,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:14:11.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d in namespace container-probe-1384 Aug 21 00:14:15.562: INFO: Started pod 
liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d in namespace container-probe-1384 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 00:14:15.590: INFO: Initial restart count of pod liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d is 0 Aug 21 00:14:29.769: INFO: Restart count of pod container-probe-1384/liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d is now 1 (14.178672058s elapsed) Aug 21 00:14:50.074: INFO: Restart count of pod container-probe-1384/liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d is now 2 (34.483733179s elapsed) Aug 21 00:15:10.337: INFO: Restart count of pod container-probe-1384/liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d is now 3 (54.7469497s elapsed) Aug 21 00:15:28.406: INFO: Restart count of pod container-probe-1384/liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d is now 4 (1m12.815201281s elapsed) Aug 21 00:16:32.727: INFO: Restart count of pod container-probe-1384/liveness-8bfb5645-3826-4a95-b263-ac3986c14f3d is now 5 (2m17.136410008s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:16:32.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1384" for this suite. 
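Note how the gaps between restarts grow in the log above (14s, then roughly 20s intervals, then over a minute by restart 5): each crash adds the container's run time plus the kubelet's CrashLoopBackOff delay, which doubles from a 10-second base up to a 5-minute cap. A sketch of the delay schedule (base and cap are the kubelet's defaults; treat the exact formula as an approximation of its internal backoff):

```python
def backoff_delays(n, base=10, cap=300):
    """Approximate CrashLoopBackOff delay (seconds) before each of the
    first n restarts: base * 2**k, capped at `cap`."""
    return [min(base * 2 ** k, cap) for k in range(n)]
```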
• [SLOW TEST:141.606 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":471,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:16:32.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:16:33.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7844" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":25,"skipped":480,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:16:33.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:16:36.088: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:16:38.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565796, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565796, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565796, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565796, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:16:41.266: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 21 00:16:45.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2015 to-be-attached-pod -i -c=container1' Aug 21 00:16:46.708: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:16:46.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2015" for this suite. STEP: Destroying namespace "webhook-2015-markers" for this suite. 
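The webhook test above registers a validating webhook and confirms `kubectl attach` exits non-zero (rc: 1) because the admission request for the pods/attach subresource is denied. The decision the webhook backend makes can be sketched as below, using a flattened dict in place of the real AdmissionReview structure (where the resource is a group/version/resource triple and `subResource` is a separate field):

```python
def admit(request):
    """Simplified admission decision: deny attach, allow everything else."""
    if request.get("resource") == "pods" and request.get("subResource") == "attach":
        return {"allowed": False,
                "status": {"message": "attaching to pod 'to-be-attached-pod' is not allowed"}}
    return {"allowed": True}
```

A denied AdmissionReview is what surfaces to the user as the failed `kubectl attach` in the log.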
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.604 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":26,"skipped":490,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:16:46.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:17:46.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1214" for this suite. • [SLOW TEST:60.088 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:17:47.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 21 00:17:47.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04" in namespace "downward-api-6269" to be "success or failure" Aug 21 00:17:47.122: INFO: Pod "downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04": Phase="Pending", Reason="", readiness=false. Elapsed: 9.34616ms Aug 21 00:17:49.128: INFO: Pod "downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01541934s Aug 21 00:17:51.134: INFO: Pod "downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04": Phase="Running", Reason="", readiness=true. Elapsed: 4.022146238s Aug 21 00:17:53.368: INFO: Pod "downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.255604511s STEP: Saw pod success Aug 21 00:17:53.368: INFO: Pod "downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04" satisfied condition "success or failure" Aug 21 00:17:53.437: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04 container client-container: STEP: delete the pod Aug 21 00:17:53.572: INFO: Waiting for pod downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04 to disappear Aug 21 00:17:53.812: INFO: Pod downwardapi-volume-828dbd8a-82f3-4648-b5e0-a9ab6e9b7a04 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:17:53.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6269" for this suite. • [SLOW TEST:6.913 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":524,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:17:53.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:17:56.482: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:17:58.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565876, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565876, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565876, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565876, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:18:01.710: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 21 00:18:01.743: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:01.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3408" for this suite. STEP: Destroying namespace "webhook-3408-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.011 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":29,"skipped":529,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:01.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276 STEP: creating the pod Aug 21 00:18:02.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1622' Aug 21 00:18:03.704: INFO: stderr: "" Aug 21 00:18:03.704: INFO: stdout: "pod/pause created\n" Aug 21 00:18:03.704: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 21 00:18:03.705: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1622" to be "running and ready" Aug 21 00:18:03.751: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 46.533659ms Aug 21 00:18:05.817: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11269272s Aug 21 00:18:07.825: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.120131066s Aug 21 00:18:07.825: INFO: Pod "pause" satisfied condition "running and ready" Aug 21 00:18:07.826: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Aug 21 00:18:07.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1622' Aug 21 00:18:09.070: INFO: stderr: "" Aug 21 00:18:09.070: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 21 00:18:09.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1622' Aug 21 00:18:10.289: INFO: stderr: "" Aug 21 00:18:10.289: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 21 00:18:10.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1622' Aug 21 00:18:11.541: INFO: stderr: "" Aug 21 00:18:11.541: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 21 00:18:11.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1622' Aug 21 00:18:12.787: INFO: stderr: "" Aug 21 00:18:12.787: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283 STEP: using delete to clean up resources Aug 21 00:18:12.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1622' Aug 21 00:18:14.064: 
INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 00:18:14.064: INFO: stdout: "pod \"pause\" force deleted\n" Aug 21 00:18:14.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1622' Aug 21 00:18:15.331: INFO: stderr: "No resources found in kubectl-1622 namespace.\n" Aug 21 00:18:15.331: INFO: stdout: "" Aug 21 00:18:15.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1622 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 21 00:18:16.558: INFO: stderr: "" Aug 21 00:18:16.559: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:16.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1622" for this suite. 
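Throughout this log, lines such as "Waiting up to 5m0s for pod "pause" in namespace "kubectl-1622" to be "running and ready"" followed by Elapsed readings every ~2 seconds come from a poll-with-timeout loop in the e2e framework. A minimal standalone sketch of that pattern (the function name and parameters here are illustrative, not the actual framework code):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns True,
    or raise TimeoutError once `timeout` seconds have elapsed."""
    start = clock()
    while True:
        elapsed = clock() - start
        if check():
            return elapsed          # analogous to the "satisfied condition" log line
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval)             # analogous to the ~2s gaps between Elapsed lines

# Example: a pod that reports Pending twice, then Running (mirrors the log above).
phases = iter(["Pending", "Pending", "Running"])
wait_for_condition(lambda: next(phases) == "Running",
                   timeout=300, interval=0, sleep=lambda s: None)
```

The injectable `clock`/`sleep` parameters are only there to make the sketch cheap to exercise; the real framework simply sleeps between polls.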
• [SLOW TEST:14.626 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273 should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":30,"skipped":543,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:16.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:32.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1755" for this suite. • [SLOW TEST:16.286 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":31,"skipped":545,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:32.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5399" for this suite. 
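The Kubelet test above ("should not write to root filesystem") exercises the pod-level `securityContext.readOnlyRootFilesystem` field. An illustrative manifest requesting that behavior (names and image are placeholders, not the suite's actual pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo            # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true   # writes to / inside the container fail with EROFS
```

With this set, the container runtime mounts the root filesystem read-only, which is what the conformance test asserts.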
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":553,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:36.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-106156bc-69b8-49d0-b3ee-1f24bd0bdd50 STEP: Creating a pod to test consume configMaps Aug 21 00:18:37.136: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0" in namespace "configmap-9373" to be "success or failure" Aug 21 00:18:37.160: INFO: Pod "pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.430313ms Aug 21 00:18:39.213: INFO: Pod "pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07599003s Aug 21 00:18:41.220: INFO: Pod "pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.08298148s STEP: Saw pod success Aug 21 00:18:41.220: INFO: Pod "pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0" satisfied condition "success or failure" Aug 21 00:18:41.225: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0 container configmap-volume-test: STEP: delete the pod Aug 21 00:18:41.276: INFO: Waiting for pod pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0 to disappear Aug 21 00:18:41.279: INFO: Pod pod-configmaps-0c8a29c6-d1de-4fb3-bd6c-b9b2d3da22a0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:41.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9373" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":566,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:41.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Aug 21 00:18:41.377: INFO: Waiting up to 5m0s for pod "client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1" in namespace "containers-3011" to be "success or failure" Aug 21 00:18:41.387: INFO: Pod "client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.52334ms Aug 21 00:18:43.434: INFO: Pod "client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057130587s Aug 21 00:18:45.442: INFO: Pod "client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065335698s STEP: Saw pod success Aug 21 00:18:45.443: INFO: Pod "client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1" satisfied condition "success or failure" Aug 21 00:18:45.448: INFO: Trying to get logs from node jerma-worker2 pod client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1 container test-container: STEP: delete the pod Aug 21 00:18:45.490: INFO: Waiting for pod client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1 to disappear Aug 21 00:18:45.500: INFO: Pod client-containers-08b89b2b-11c2-4c4e-ace7-f65b550791f1 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:45.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3011" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":567,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:45.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Aug 21 00:18:45.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4085' Aug 21 00:18:47.181: INFO: stderr: "" Aug 21 00:18:47.181: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Aug 21 00:18:48.190: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:18:48.191: INFO: Found 0 / 1 Aug 21 00:18:49.190: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:18:49.190: INFO: Found 0 / 1 Aug 21 00:18:50.190: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:18:50.191: INFO: Found 1 / 1 Aug 21 00:18:50.191: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 21 00:18:50.197: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:18:50.197: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 21 00:18:50.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-r7jvf --namespace=kubectl-4085 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 21 00:18:51.448: INFO: stderr: "" Aug 21 00:18:51.449: INFO: stdout: "pod/agnhost-master-r7jvf patched\n" STEP: checking annotations Aug 21 00:18:51.456: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:18:51.456: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:51.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4085" for this suite. 
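The `kubectl patch pod … -p {"metadata":{"annotations":{"x":"y"}}}` invocation above relies on merge-patch semantics: map fields like annotations are merged key-by-key rather than replaced wholesale. A minimal sketch of those semantics in the style of RFC 7386 JSON Merge Patch (an approximation of kubectl's default strategic merge for plain map fields, not the actual apimachinery code):

```python
def json_merge_patch(target, patch):
    """RFC 7386-style merge: dicts merge recursively, None deletes a key,
    any other value replaces the existing one."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

# Mirrors the patch applied in the log: only the "x" annotation is added,
# nothing else in metadata is disturbed.
pod = {"metadata": {"name": "agnhost-master-r7jvf", "annotations": {}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

For lists (e.g. `spec.containers`) Kubernetes' strategic merge patch diverges from this plain merge by using patch-merge keys; the sketch covers only the map case the log exercises.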
• [SLOW TEST:5.953 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433 should add annotations for pods in rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":35,"skipped":578,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:51.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 21 00:18:51.571: INFO: Waiting up to 5m0s for pod "downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad" in namespace "downward-api-4597" to be "success or failure" Aug 21 00:18:51.601: INFO: Pod "downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad": Phase="Pending", 
Reason="", readiness=false. Elapsed: 29.521381ms Aug 21 00:18:53.627: INFO: Pod "downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055496648s Aug 21 00:18:55.650: INFO: Pod "downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078990327s STEP: Saw pod success Aug 21 00:18:55.650: INFO: Pod "downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad" satisfied condition "success or failure" Aug 21 00:18:55.654: INFO: Trying to get logs from node jerma-worker2 pod downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad container dapi-container: STEP: delete the pod Aug 21 00:18:55.700: INFO: Waiting for pod downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad to disappear Aug 21 00:18:55.914: INFO: Pod downward-api-7caecd15-56c8-40fd-898b-77d781fd59ad no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:18:55.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4597" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:18:55.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:19:13.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2726" for this suite. • [SLOW TEST:17.416 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":37,"skipped":614,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:19:13.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:19:15.042: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:19:17.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733565955, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565955, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565955, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565955, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 00:19:19.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565955, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565955, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565955, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565955, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:19:22.483: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:19:22.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6593-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:19:23.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1252" for this suite. STEP: Destroying namespace "webhook-1252-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.714 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":38,"skipped":636,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:19:24.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 21 00:19:24.115: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 21 00:19:24.239: INFO: Waiting for terminating namespaces to be deleted... Aug 21 00:19:24.244: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 21 00:19:24.260: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.260: INFO: Container app ready: true, restart count 0 Aug 21 00:19:24.260: INFO: rally-fabfd9a7-elqcysr8 from c-rally-fabfd9a7-r1unmvn3 started at 2020-08-21 00:19:06 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.261: INFO: Container rally-fabfd9a7-elqcysr8 ready: true, restart count 0 Aug 21 00:19:24.261: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.261: INFO: Container kube-proxy ready: true, restart count 0 Aug 21 00:19:24.261: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.261: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 00:19:24.261: INFO: Logging pods the kubelet thinks is on node 
jerma-worker2 before test Aug 21 00:19:24.281: INFO: sample-webhook-deployment-5f65f8c764-blbzg from webhook-1252 started at 2020-08-21 00:19:15 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.281: INFO: Container sample-webhook ready: true, restart count 0 Aug 21 00:19:24.281: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.281: INFO: Container kube-proxy ready: true, restart count 0 Aug 21 00:19:24.282: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.282: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 00:19:24.282: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 21 00:19:24.282: INFO: Container app ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162d207d1ac1ca71], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:19:25.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6654" for this suite. 
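The FailedScheduling event above ("3 node(s) didn't match node selector") can be reproduced outside the suite with a pod whose nodeSelector matches a label present on no node. A minimal sketch — the label key, value, and image path below are illustrative assumptions, not values taken from the test:

```yaml
# Sketch: a pod with a nodeSelector that no node satisfies.
# The scheduler will emit a FailedScheduling event like the one logged above.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod            # mirrors the pod name in the event above
spec:
  nodeSelector:
    example.invalid/label: "42"   # hypothetical label; present on no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # registry path is an assumption
```

`kubectl describe pod restricted-pod` should then show the same "node(s) didn't match node selector" message under Events.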
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":39,"skipped":642,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:19:25.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:19:25.774: INFO: Creating ReplicaSet my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565 Aug 21 00:19:25.833: INFO: Pod name my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565: Found 0 pods out of 1 Aug 21 00:19:31.035: INFO: Pod name my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565: Found 1 pods out of 1 Aug 21 00:19:31.035: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565" is running Aug 21 00:19:31.274: INFO: Pod "my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565-4sgc5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-08-21 00:19:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 00:19:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 00:19:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 00:19:25 +0000 UTC Reason: Message:}]) Aug 21 00:19:31.275: INFO: Trying to dial the pod Aug 21 00:19:36.296: INFO: Controller my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565: Got expected result from replica 1 [my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565-4sgc5]: "my-hostname-basic-e61bba9e-6583-4242-8a03-ca1f09f3f565-4sgc5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:19:36.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5483" for this suite. 
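The ReplicaSet the test creates is of roughly the following shape: one replica of an image that serves the pod's hostname over HTTP, which the test then dials to confirm each replica answers with its own name. This is a sketch, assuming the agnhost image and its serve-hostname mode; the names and image tag are illustrative, not copied from the test:

```yaml
# Sketch of the kind of ReplicaSet exercised above: a single replica
# serving its hostname, so a dial to the pod returns the pod name.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # assumed image
        args: ["serve-hostname"]
```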
• [SLOW TEST:10.762 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":40,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:19:36.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2793.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2793.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2793.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2793.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2793.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 147.192.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.192.147_udp@PTR;check="$$(dig +tcp +noall +answer +search 147.192.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.192.147_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2793.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2793.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2793.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2793.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2793.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2793.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 147.192.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.192.147_udp@PTR;check="$$(dig +tcp +noall +answer +search 147.192.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.192.147_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 00:19:45.386: INFO: Unable to read wheezy_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.391: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.394: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.397: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.424: INFO: Unable to read jessie_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.428: INFO: Unable to read jessie_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.461: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod 
dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.466: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:45.539: INFO: Lookups using dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb failed for: [wheezy_udp@dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_udp@dns-test-service.dns-2793.svc.cluster.local jessie_tcp@dns-test-service.dns-2793.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local] Aug 21 00:19:50.545: INFO: Unable to read wheezy_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.550: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.553: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.557: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod 
dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.588: INFO: Unable to read jessie_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.595: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.598: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:50.616: INFO: Lookups using dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb failed for: [wheezy_udp@dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_udp@dns-test-service.dns-2793.svc.cluster.local jessie_tcp@dns-test-service.dns-2793.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local] Aug 21 00:19:55.607: INFO: Unable to read wheezy_udp@dns-test-service.dns-2793.svc.cluster.local from pod 
dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.616: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.620: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.646: INFO: Unable to read jessie_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.650: INFO: Unable to read jessie_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.654: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.657: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not 
find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:19:55.734: INFO: Lookups using dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb failed for: [wheezy_udp@dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_udp@dns-test-service.dns-2793.svc.cluster.local jessie_tcp@dns-test-service.dns-2793.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local] Aug 21 00:20:00.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.550: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.553: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.557: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.581: INFO: Unable to read jessie_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods 
dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.586: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.589: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:00.608: INFO: Lookups using dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb failed for: [wheezy_udp@dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_udp@dns-test-service.dns-2793.svc.cluster.local jessie_tcp@dns-test-service.dns-2793.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local] Aug 21 00:20:05.545: INFO: Unable to read wheezy_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.550: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods 
dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.554: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.559: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.589: INFO: Unable to read jessie_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.593: INFO: Unable to read jessie_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.597: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.602: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:05.635: INFO: Lookups using dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb failed for: [wheezy_udp@dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_udp@dns-test-service.dns-2793.svc.cluster.local jessie_tcp@dns-test-service.dns-2793.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local] Aug 21 00:20:10.627: INFO: Unable to read wheezy_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.631: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.635: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.639: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.699: INFO: Unable to read jessie_udp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.702: INFO: Unable to read jessie_tcp@dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.707: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.711: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local from pod dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb: the server could not find the requested resource (get pods dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb) Aug 21 00:20:10.730: INFO: Lookups using dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb failed for: [wheezy_udp@dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@dns-test-service.dns-2793.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_udp@dns-test-service.dns-2793.svc.cluster.local jessie_tcp@dns-test-service.dns-2793.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2793.svc.cluster.local] Aug 21 00:20:15.807: INFO: DNS probes using dns-2793/dns-test-3828d23c-7be8-46a6-8786-6844f4ecf1fb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:20:16.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2793" for this suite. 
• [SLOW TEST:40.162 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":41,"skipped":724,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:20:16.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:20:22.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-7495" for this suite. • [SLOW TEST:6.192 seconds] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":736,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:20:22.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 21 00:20:23.591: INFO: Pod name 
wrapped-volume-race-f0362f39-4bb8-4d3e-8251-d469adbfa9d5: Found 0 pods out of 5 Aug 21 00:20:28.610: INFO: Pod name wrapped-volume-race-f0362f39-4bb8-4d3e-8251-d469adbfa9d5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f0362f39-4bb8-4d3e-8251-d469adbfa9d5 in namespace emptydir-wrapper-7998, will wait for the garbage collector to delete the pods Aug 21 00:20:42.933: INFO: Deleting ReplicationController wrapped-volume-race-f0362f39-4bb8-4d3e-8251-d469adbfa9d5 took: 9.865185ms Aug 21 00:20:43.234: INFO: Terminating ReplicationController wrapped-volume-race-f0362f39-4bb8-4d3e-8251-d469adbfa9d5 pods took: 300.862533ms STEP: Creating RC which spawns configmap-volume pods Aug 21 00:20:52.193: INFO: Pod name wrapped-volume-race-e2e83097-2cb5-486c-a791-7e4e0010f3f9: Found 1 pods out of 5 Aug 21 00:20:57.216: INFO: Pod name wrapped-volume-race-e2e83097-2cb5-486c-a791-7e4e0010f3f9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e2e83097-2cb5-486c-a791-7e4e0010f3f9 in namespace emptydir-wrapper-7998, will wait for the garbage collector to delete the pods Aug 21 00:21:09.328: INFO: Deleting ReplicationController wrapped-volume-race-e2e83097-2cb5-486c-a791-7e4e0010f3f9 took: 9.428095ms Aug 21 00:21:09.729: INFO: Terminating ReplicationController wrapped-volume-race-e2e83097-2cb5-486c-a791-7e4e0010f3f9 pods took: 400.948095ms STEP: Creating RC which spawns configmap-volume pods Aug 21 00:21:21.876: INFO: Pod name wrapped-volume-race-c8ec0b34-aca6-40f1-8d1c-ee55c572bf7c: Found 0 pods out of 5 Aug 21 00:21:26.892: INFO: Pod name wrapped-volume-race-c8ec0b34-aca6-40f1-8d1c-ee55c572bf7c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c8ec0b34-aca6-40f1-8d1c-ee55c572bf7c in namespace emptydir-wrapper-7998, will wait for the garbage collector to delete the pods Aug 21 00:21:43.045: 
INFO: Deleting ReplicationController wrapped-volume-race-c8ec0b34-aca6-40f1-8d1c-ee55c572bf7c took: 10.005518ms Aug 21 00:21:43.446: INFO: Terminating ReplicationController wrapped-volume-race-c8ec0b34-aca6-40f1-8d1c-ee55c572bf7c pods took: 400.685391ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:21:52.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7998" for this suite. • [SLOW TEST:90.271 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":43,"skipped":747,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:21:52.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to 
be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-59cc9feb-1d1b-4f32-844b-39326a7c45aa STEP: Creating a pod to test consume secrets Aug 21 00:21:53.067: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66" in namespace "projected-8602" to be "success or failure" Aug 21 00:21:53.074: INFO: Pod "pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66": Phase="Pending", Reason="", readiness=false. Elapsed: 7.320334ms Aug 21 00:21:55.125: INFO: Pod "pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057897789s Aug 21 00:21:57.131: INFO: Pod "pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064389426s Aug 21 00:21:59.172: INFO: Pod "pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.105354659s STEP: Saw pod success Aug 21 00:21:59.172: INFO: Pod "pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66" satisfied condition "success or failure" Aug 21 00:21:59.191: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66 container secret-volume-test: STEP: delete the pod Aug 21 00:21:59.570: INFO: Waiting for pod pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66 to disappear Aug 21 00:21:59.586: INFO: Pod pod-projected-secrets-9d59acbb-139d-49de-b8be-c91deec47f66 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:21:59.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8602" for this suite. • [SLOW TEST:6.670 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":749,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:21:59.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 21 00:21:59.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5621' Aug 21 00:22:07.195: INFO: stderr: "" Aug 21 00:22:07.195: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765 Aug 21 00:22:07.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5621' Aug 21 00:22:11.588: INFO: stderr: "" Aug 21 00:22:11.589: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:22:11.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5621" for this suite. • [SLOW TEST:11.989 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":45,"skipped":752,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:22:11.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:22:17.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6203" for this suite. STEP: Destroying namespace "nsdeletetest-1663" for this suite. Aug 21 00:22:17.981: INFO: Namespace nsdeletetest-1663 was already deleted STEP: Destroying namespace "nsdeletetest-4946" for this suite. 
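The STEP sequence above checks a cascade invariant: deleting a namespace garbage-collects every object scoped to it, so a recreated namespace of the same name starts empty. A toy in-memory model of that invariant (this is an illustration only, not the real API server or namespace controller):

```python
class ToyCluster:
    """Toy model of namespace-scoped lifecycle: deleting a namespace
    removes every service scoped to it, mirroring what the test verifies."""

    def __init__(self):
        self.namespaces = set()
        self.services = {}  # (namespace, name) -> spec

    def create_namespace(self, ns):
        self.namespaces.add(ns)

    def create_service(self, ns, name):
        if ns not in self.namespaces:
            raise KeyError(f"namespace {ns} not found")
        self.services[(ns, name)] = {}

    def delete_namespace(self, ns):
        self.namespaces.discard(ns)
        # cascade: drop all services scoped to the deleted namespace
        self.services = {k: v for k, v in self.services.items() if k[0] != ns}

    def list_services(self, ns):
        return [name for (n, name) in self.services if n == ns]
```

Running the test's steps against this model — create namespace, create a service in it, delete the namespace, recreate it, list services — ends with an empty list, which is exactly the "Verifying there is no service in the namespace" step.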
• [SLOW TEST:6.383 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":46,"skipped":752,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:22:17.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 21 00:22:18.091: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 
/api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed c9382fb6-ddc6-47ac-a543-e748d7115630 1974820 0 2020-08-21 00:22:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 21 00:22:18.092: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 /api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed c9382fb6-ddc6-47ac-a543-e748d7115630 1974821 0 2020-08-21 00:22:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 21 00:22:18.104: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 /api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed c9382fb6-ddc6-47ac-a543-e748d7115630 1974822 0 2020-08-21 00:22:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 21 00:22:18.105: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3376 /api/v1/namespaces/watch-3376/configmaps/e2e-watch-test-watch-closed c9382fb6-ddc6-47ac-a543-e748d7115630 1974823 0 2020-08-21 00:22:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:22:18.105: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "watch-3376" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":47,"skipped":773,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:22:18.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50 [It] should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Aug 21 00:22:22.249: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Aug 21 00:22:28.536: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:22:28.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2788" for this suite. • [SLOW TEST:10.447 seconds] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":48,"skipped":793,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:22:28.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional 
scaling [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:22:28.723: INFO: Creating deployment "webserver-deployment" Aug 21 00:22:28.730: INFO: Waiting for observed generation 1 Aug 21 00:22:30.790: INFO: Waiting for all required pods to come up Aug 21 00:22:30.799: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 21 00:22:42.816: INFO: Waiting for deployment "webserver-deployment" to complete Aug 21 00:22:42.827: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 21 00:22:42.841: INFO: Updating deployment webserver-deployment Aug 21 00:22:42.841: INFO: Waiting for observed generation 2 Aug 21 00:22:45.138: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 21 00:22:45.144: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 21 00:22:45.149: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 21 00:22:45.162: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 21 00:22:45.162: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 21 00:22:45.166: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 21 00:22:45.174: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 21 00:22:45.174: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 21 00:22:45.182: INFO: Updating deployment webserver-deployment Aug 21 00:22:45.183: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 21 00:22:46.219: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 21 
00:22:47.405: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 21 00:22:50.439: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2446 /apis/apps/v1/namespaces/deployment-2446/deployments/webserver-deployment 106c0bb8-b790-40f2-835a-59190b75dc91 1975251 3 2020-08-21 00:22:28 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002d5fa08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-21 00:22:46 +0000 
UTC,LastTransitionTime:2020-08-21 00:22:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-21 00:22:47 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 21 00:22:50.590: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-2446 /apis/apps/v1/namespaces/deployment-2446/replicasets/webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 1975248 3 2020-08-21 00:22:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 106c0bb8-b790-40f2-835a-59190b75dc91 0x40032dc407 0x40032dc408}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40032dc478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 21 00:22:50.590: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 21 00:22:50.591: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-2446 /apis/apps/v1/namespaces/deployment-2446/replicasets/webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 1975242 3 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 106c0bb8-b790-40f2-835a-59190b75dc91 0x40032dc347 0x40032dc348}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40032dc3a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 21 00:22:50.765: INFO: Pod "webserver-deployment-595b5b9587-26nd9" is not available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-26nd9 webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-26nd9 b0e22b9b-7ec8-4f1e-b037-5ba12664e0d1 1975244 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dc927 0x40032dc928}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadl
ineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.766: INFO: Pod "webserver-deployment-595b5b9587-299kv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-299kv webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-299kv 92b9bc8e-e57b-4d0f-a8be-559881d68f57 1975274 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dca87 0x40032dca88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.768: INFO: Pod "webserver-deployment-595b5b9587-4wcbk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4wcbk webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-4wcbk ef8669b3-c203-4bb8-b3de-f3c7a1d8e3c3 1975042 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dcbe7 0x40032dcbe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.215,StartTime:2020-08-21 00:22:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://93872f3641647b8d86b5ec30b1d7a4c3bfa9bbfa49461c543b797bc9ff8fa74a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.769: INFO: Pod "webserver-deployment-595b5b9587-5ps98" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5ps98 webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-5ps98 0eb4a5fc-410d-42e3-9eaf-061c17a3e967 1975229 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dcd67 0x40032dcd68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.770: INFO: Pod "webserver-deployment-595b5b9587-6zh6k" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6zh6k webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-6zh6k 6a45fbd3-8354-48e5-8d4f-4c022df5889b 1975033 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dce87 0x40032dce88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.216,StartTime:2020-08-21 00:22:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:37 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ab981092f292f68d48b60fbd2f7f7e53ff2a7d01f81fcd3181fc596d4a68169,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.772: INFO: Pod "webserver-deployment-595b5b9587-9rfnr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9rfnr webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-9rfnr 1d5a1fcc-2386-4830-b572-747ed964f5c2 1975217 0 2020-08-21 00:22:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd007 0x40032dd008}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.773: INFO: Pod "webserver-deployment-595b5b9587-bfk42" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bfk42 webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-bfk42 ebc848c6-c29e-4c02-8e46-223883994200 1975282 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd167 0x40032dd168}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:22:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.774: INFO: Pod "webserver-deployment-595b5b9587-cjwwx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cjwwx webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-cjwwx 666b04a7-c633-46a3-a9cc-b75f9e136433 1974987 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd2c7 0x40032dd2c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.214,StartTime:2020-08-21 00:22:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e97707d276fec55de310f17422def844b77cada913788ad48e8ebd56225dbc72,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.774: INFO: Pod "webserver-deployment-595b5b9587-d2p7s" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d2p7s webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-d2p7s bec5db4d-2ded-4fe3-be62-050464f5d393 1975232 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd447 0x40032dd448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.775: INFO: Pod "webserver-deployment-595b5b9587-f9t5k" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f9t5k webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-f9t5k 4dbf73ca-c632-4bab-8ee0-276fb304321c 1975078 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd567 0x40032dd568}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.218,StartTime:2020-08-21 00:22:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:39 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d8e06854ae5d5cd80bcc3828f6c99dbd570052ab2511bea54939f96a0a82457d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.776: INFO: Pod "webserver-deployment-595b5b9587-fm5z5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fm5z5 webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-fm5z5 9555f065-862f-44f1-8b46-a18de7c1cbd4 1975285 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd6e7 0x40032dd6e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.777: INFO: Pod "webserver-deployment-595b5b9587-hzjk7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hzjk7 webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-hzjk7 b9898095-39d8-4448-b9a7-add35101b0f7 1975258 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd847 0x40032dd848}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:22:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.778: INFO: Pod "webserver-deployment-595b5b9587-l25mv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l25mv webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-l25mv 9750c7c3-2d4d-4780-9e91-f4207d358998 1975272 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032dd9a7 0x40032dd9a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:22:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.779: INFO: Pod "webserver-deployment-595b5b9587-mz5k7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mz5k7 webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-mz5k7 cc1215a2-bd5b-41e5-b380-70de6c8bf178 1975230 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032ddb07 0x40032ddb08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.780: INFO: Pod "webserver-deployment-595b5b9587-nl9n7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nl9n7 webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-nl9n7 c321b344-4fce-433c-9c0d-cc7642fa5524 1975233 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032ddc27 0x40032ddc28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.781: INFO: Pod "webserver-deployment-595b5b9587-pm4dm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pm4dm webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-pm4dm ef61e542-9754-42eb-b23c-42071d6f8d9a 1975076 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032ddd47 0x40032ddd48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.197,StartTime:2020-08-21 00:22:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:40 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e5ecbcca06bf8bf9a2a23ceab80ed355901c0e81058535946e4b55604a3afbdd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.782: INFO: Pod "webserver-deployment-595b5b9587-qn79l" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qn79l webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-qn79l 32418e35-7fcb-4e0e-9b1c-179c6c81e8a3 1975026 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40032ddec7 0x40032ddec8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.194,StartTime:2020-08-21 00:22:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aab854ccde3327e4cad964356265f85e7a7761a1fd27fa15798a800dd9bda1c3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.784: INFO: Pod "webserver-deployment-595b5b9587-qv5fl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qv5fl webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-qv5fl 722ad6e2-8060-43d4-967a-881b5d7b4ed1 1975057 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40033ae047 0x40033ae048}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.195,StartTime:2020-08-21 00:22:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:37 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://09d156090b7663c76285a6e486fa0fd9dd9bc1331de9bcfa2589b978f841b746,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.785: INFO: Pod "webserver-deployment-595b5b9587-tglbb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tglbb webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-tglbb 0cdaceea-9d7f-48a7-833e-01320c7b2863 1975052 0 2020-08-21 00:22:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40033ae1c7 0x40033ae1c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.217,StartTime:2020-08-21 00:22:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 00:22:38 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://071868229ffadfc8b634047c529d01e6062791345e31d71f8d68d4b78245d4a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.786: INFO: Pod "webserver-deployment-595b5b9587-w8vxp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w8vxp webserver-deployment-595b5b9587- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-595b5b9587-w8vxp 38e34d85-0db4-47ff-afea-3e28e7a29965 1975257 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 56f04cd6-b885-4fb0-8f32-8b6bb0ad4342 0x40033ae347 0x40033ae348}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.787: INFO: Pod "webserver-deployment-c7997dcc8-5x8jj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5x8jj webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-5x8jj 33bdf0db-c9ab-4dbc-8134-ee6e6ab60604 1975172 0 2020-08-21 00:22:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033ae4a7 0x40033ae4a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.789: INFO: Pod "webserver-deployment-c7997dcc8-cdft9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cdft9 webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-cdft9 875e837e-b987-4b9a-94a2-08175c549371 1975226 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033ae627 0x40033ae628}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.790: INFO: Pod "webserver-deployment-c7997dcc8-d5t7q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d5t7q webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-d5t7q 21edd9cc-c692-4f66-b666-43b6e158c5a3 1975227 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033ae757 0x40033ae758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.791: INFO: Pod "webserver-deployment-c7997dcc8-d72bb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d72bb webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-d72bb a51718c4-bda7-44d8-bcab-8d893a0f88e3 1975142 0 2020-08-21 00:22:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033ae887 0x40033ae888}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:22:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.792: INFO: Pod "webserver-deployment-c7997dcc8-dfnlv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dfnlv webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-dfnlv f5101abc-16ea-4c7b-ad3b-7dbd7d97c4c8 1975139 0 2020-08-21 00:22:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033aea07 0x40033aea08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.793: INFO: Pod "webserver-deployment-c7997dcc8-hg695" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hg695 webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-hg695 717d0be6-6e6a-46a7-9331-d06e723508ae 1975155 0 2020-08-21 00:22:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033aeb87 0x40033aeb88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.794: INFO: Pod "webserver-deployment-c7997dcc8-hw8xm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hw8xm webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-hw8xm d8b7eca4-45b5-491e-b67c-151ff48ce513 1975170 0 2020-08-21 00:22:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033aed07 0x40033aed08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:22:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.795: INFO: Pod "webserver-deployment-c7997dcc8-k76st" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k76st webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-k76st 656914d7-b762-4075-a4a3-6b9b2d99e3ef 1975245 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033aee87 0x40033aee88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:22:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.797: INFO: Pod "webserver-deployment-c7997dcc8-kv2h8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kv2h8 webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-kv2h8 65817336-ed71-40b2-9797-4236a25bef68 1975239 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033af007 0x40033af008}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 00:22:50.799: INFO: Pod "webserver-deployment-c7997dcc8-mcv6c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mcv6c webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-mcv6c 0e157d23-ac92-4a88-a0c8-86295ac805b9 1975266 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033af137 0x40033af138}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-21 00:22:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.800: INFO: Pod "webserver-deployment-c7997dcc8-npc9q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-npc9q webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-npc9q 0735199b-7aa4-43f1-8f6c-26daf91b1856 1975223 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033af2b7 0x40033af2b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 00:22:50.802: INFO: Pod "webserver-deployment-c7997dcc8-pwrm2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pwrm2 webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-pwrm2 f55ba5c6-e68c-4b1d-9784-6e7a563f3346 1975231 0 2020-08-21 00:22:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033af3e7 0x40033af3e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 00:22:50.804: INFO: Pod "webserver-deployment-c7997dcc8-r7fzk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r7fzk webserver-deployment-c7997dcc8- deployment-2446 /api/v1/namespaces/deployment-2446/pods/webserver-deployment-c7997dcc8-r7fzk 33d1f88e-4bfd-4803-aaa6-5a2952d3a4e4 1975264 0 2020-08-21 00:22:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5fd1bbce-0031-4510-a2dd-b1978899b279 0x40033af517 0x40033af518}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2fz45,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2fz45,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2fz45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:22:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:22:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:22:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2446" for this suite.
• [SLOW TEST:22.981 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":49,"skipped":804,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:22:51.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 21 00:22:53.743: INFO: Waiting up to 5m0s for pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140" in namespace "emptydir-2912" to be "success or failure"
Aug 21 00:22:54.235: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Pending", Reason="", readiness=false. Elapsed: 491.667888ms
Aug 21 00:22:56.425: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.681924412s
Aug 21 00:22:58.882: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Pending", Reason="", readiness=false. Elapsed: 5.13942071s
Aug 21 00:23:01.001: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Pending", Reason="", readiness=false. Elapsed: 7.25831579s
Aug 21 00:23:03.496: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Pending", Reason="", readiness=false. Elapsed: 9.753475586s
Aug 21 00:23:05.581: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Pending", Reason="", readiness=false. Elapsed: 11.838111141s
Aug 21 00:23:07.737: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Pending", Reason="", readiness=false. Elapsed: 13.993764975s
Aug 21 00:23:09.956: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Running", Reason="", readiness=true. Elapsed: 16.213149224s
Aug 21 00:23:12.114: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Running", Reason="", readiness=true. Elapsed: 18.371333392s
Aug 21 00:23:14.234: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.491147854s
STEP: Saw pod success
Aug 21 00:23:14.235: INFO: Pod "pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140" satisfied condition "success or failure"
Aug 21 00:23:14.325: INFO: Trying to get logs from node jerma-worker pod pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140 container test-container:
STEP: delete the pod
Aug 21 00:23:15.547: INFO: Waiting for pod pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140 to disappear
Aug 21 00:23:16.320: INFO: Pod pod-633ffe59-ec61-4f0a-b7eb-a5032eca1140 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:23:16.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2912" for this suite.
• [SLOW TEST:26.109 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":824,"failed":0}
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:23:17.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7801 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 21 00:23:19.051: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 21 00:23:55.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.214:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7801 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:23:55.497: INFO: >>> kubeConfig: /root/.kube/config I0821 00:23:55.612996 7 log.go:172] (0x4002d18a50) (0x400136e460) Create stream I0821 00:23:55.613631 7 log.go:172] (0x4002d18a50) (0x400136e460) Stream added, broadcasting: 1 I0821 00:23:55.634532 7 log.go:172] (0x4002d18a50) Reply frame received for 1 I0821 00:23:55.635354 7 log.go:172] (0x4002d18a50) (0x400136e640) Create stream I0821 00:23:55.635465 7 log.go:172] (0x4002d18a50) (0x400136e640) Stream added, broadcasting: 3 I0821 00:23:55.638436 7 log.go:172] (0x4002d18a50) Reply frame received for 3 I0821 00:23:55.639254 7 log.go:172] (0x4002d18a50) (0x40015aa1e0) Create stream I0821 00:23:55.639479 7 log.go:172] (0x4002d18a50) (0x40015aa1e0) Stream added, broadcasting: 5 I0821 00:23:55.641576 7 
log.go:172] (0x4002d18a50) Reply frame received for 5 I0821 00:23:55.738535 7 log.go:172] (0x4002d18a50) Data frame received for 5 I0821 00:23:55.738724 7 log.go:172] (0x4002d18a50) Data frame received for 3 I0821 00:23:55.739030 7 log.go:172] (0x4002d18a50) Data frame received for 1 I0821 00:23:55.739189 7 log.go:172] (0x400136e460) (1) Data frame handling I0821 00:23:55.739257 7 log.go:172] (0x40015aa1e0) (5) Data frame handling I0821 00:23:55.739516 7 log.go:172] (0x400136e640) (3) Data frame handling I0821 00:23:55.740299 7 log.go:172] (0x400136e640) (3) Data frame sent I0821 00:23:55.740474 7 log.go:172] (0x400136e460) (1) Data frame sent I0821 00:23:55.740634 7 log.go:172] (0x4002d18a50) Data frame received for 3 I0821 00:23:55.740780 7 log.go:172] (0x400136e640) (3) Data frame handling I0821 00:23:55.742481 7 log.go:172] (0x4002d18a50) (0x400136e460) Stream removed, broadcasting: 1 I0821 00:23:55.743232 7 log.go:172] (0x4002d18a50) Go away received I0821 00:23:55.745471 7 log.go:172] (0x4002d18a50) (0x400136e460) Stream removed, broadcasting: 1 I0821 00:23:55.745787 7 log.go:172] (0x4002d18a50) (0x400136e640) Stream removed, broadcasting: 3 I0821 00:23:55.746025 7 log.go:172] (0x4002d18a50) (0x40015aa1e0) Stream removed, broadcasting: 5 Aug 21 00:23:55.746: INFO: Found all expected endpoints: [netserver-0] Aug 21 00:23:55.779: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.232:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7801 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:23:55.779: INFO: >>> kubeConfig: /root/.kube/config I0821 00:23:55.885780 7 log.go:172] (0x40031324d0) (0x40015aaaa0) Create stream I0821 00:23:55.886043 7 log.go:172] (0x40031324d0) (0x40015aaaa0) Stream added, broadcasting: 1 I0821 00:23:55.892074 7 log.go:172] (0x40031324d0) Reply frame received for 1 I0821 00:23:55.892280 7 
log.go:172] (0x40031324d0) (0x4000edaa00) Create stream I0821 00:23:55.892393 7 log.go:172] (0x40031324d0) (0x4000edaa00) Stream added, broadcasting: 3 I0821 00:23:55.894238 7 log.go:172] (0x40031324d0) Reply frame received for 3 I0821 00:23:55.894481 7 log.go:172] (0x40031324d0) (0x400136e8c0) Create stream I0821 00:23:55.894598 7 log.go:172] (0x40031324d0) (0x400136e8c0) Stream added, broadcasting: 5 I0821 00:23:55.895985 7 log.go:172] (0x40031324d0) Reply frame received for 5 I0821 00:23:55.975713 7 log.go:172] (0x40031324d0) Data frame received for 3 I0821 00:23:55.976006 7 log.go:172] (0x4000edaa00) (3) Data frame handling I0821 00:23:55.976131 7 log.go:172] (0x4000edaa00) (3) Data frame sent I0821 00:23:55.976276 7 log.go:172] (0x40031324d0) Data frame received for 5 I0821 00:23:55.976526 7 log.go:172] (0x400136e8c0) (5) Data frame handling I0821 00:23:55.976909 7 log.go:172] (0x40031324d0) Data frame received for 3 I0821 00:23:55.977071 7 log.go:172] (0x40031324d0) Data frame received for 1 I0821 00:23:55.977234 7 log.go:172] (0x40015aaaa0) (1) Data frame handling I0821 00:23:55.977379 7 log.go:172] (0x40015aaaa0) (1) Data frame sent I0821 00:23:55.977545 7 log.go:172] (0x40031324d0) (0x40015aaaa0) Stream removed, broadcasting: 1 I0821 00:23:55.977706 7 log.go:172] (0x4000edaa00) (3) Data frame handling I0821 00:23:55.977960 7 log.go:172] (0x40031324d0) Go away received I0821 00:23:55.978159 7 log.go:172] (0x40031324d0) (0x40015aaaa0) Stream removed, broadcasting: 1 I0821 00:23:55.978325 7 log.go:172] (0x40031324d0) (0x4000edaa00) Stream removed, broadcasting: 3 I0821 00:23:55.978453 7 log.go:172] (0x40031324d0) (0x400136e8c0) Stream removed, broadcasting: 5 Aug 21 00:23:55.978: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:23:55.979: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7801" for this suite.
• [SLOW TEST:38.337 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":824,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:23:56.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 21 00:24:01.154: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 21 00:24:03.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566241, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566241, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566241, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566241, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:24:06.237: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:24:06.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 
custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:24:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1583" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:11.544 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":52,"skipped":921,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:24:07.550: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0821 00:24:08.800437 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 00:24:08.802: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:24:08.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2295" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":53,"skipped":932,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:24:08.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:24:08.926: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 21 00:24:08.971: INFO: Number of nodes with available pods: 0
Aug 21 00:24:08.972: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
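(Editor's note: the label flip in the step above relies on plain nodeSelector matching — a DaemonSet pod is placed on a node only when every key/value pair in the DaemonSet's nodeSelector is present among the node's labels. A minimal Python sketch of that containment rule follows; it is not the e2e framework's own code, and the `color: blue` / `color: green` labels are hypothetical stand-ins for the label the test actually applies.)

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """True when every key/value pair required by the selector is present
    on the node -- the containment rule a nodeSelector expresses
    (ignoring taints, affinity, and resource predicates)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical walkthrough mirroring the test's label flips:
selector = {"color": "blue"}
print(node_selector_matches({}, selector))                  # unlabeled node: False
print(node_selector_matches({"color": "blue"}, selector))   # labeled blue: True
print(node_selector_matches({"color": "green"}, selector))  # relabeled green: False
```

(Once the test relabels the node green and updates the DaemonSet's selector to green, the same check matches again — which is why the daemon pod disappears and then reappears in the log below. The real scheduler additionally evaluates taints, affinity, and resource requests.)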
Aug 21 00:24:09.074: INFO: Number of nodes with available pods: 0
Aug 21 00:24:09.074: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:10.082: INFO: Number of nodes with available pods: 0
Aug 21 00:24:10.082: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:11.081: INFO: Number of nodes with available pods: 0
Aug 21 00:24:11.081: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:12.081: INFO: Number of nodes with available pods: 0
Aug 21 00:24:12.081: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:13.081: INFO: Number of nodes with available pods: 1
Aug 21 00:24:13.081: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 21 00:24:13.122: INFO: Number of nodes with available pods: 1
Aug 21 00:24:13.122: INFO: Number of running nodes: 0, number of available pods: 1
Aug 21 00:24:14.145: INFO: Number of nodes with available pods: 0
Aug 21 00:24:14.145: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 21 00:24:14.164: INFO: Number of nodes with available pods: 0
Aug 21 00:24:14.165: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:15.172: INFO: Number of nodes with available pods: 0
Aug 21 00:24:15.172: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:16.171: INFO: Number of nodes with available pods: 0
Aug 21 00:24:16.171: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:17.172: INFO: Number of nodes with available pods: 0
Aug 21 00:24:17.173: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:18.172: INFO: Number of nodes with available pods: 0
Aug 21 00:24:18.172: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:19.171: INFO: Number of nodes with available pods: 0
Aug 21 00:24:19.171: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:20.172: INFO: Number of nodes with available pods: 0
Aug 21 00:24:20.172: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:21.172: INFO: Number of nodes with available pods: 0
Aug 21 00:24:21.172: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:22.173: INFO: Number of nodes with available pods: 0
Aug 21 00:24:22.173: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:23.189: INFO: Number of nodes with available pods: 0
Aug 21 00:24:23.190: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:24.172: INFO: Number of nodes with available pods: 0
Aug 21 00:24:24.172: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 21 00:24:25.223: INFO: Number of nodes with available pods: 1
Aug 21 00:24:25.223: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2875, will wait for the garbage collector to delete the pods
Aug 21 00:24:25.416: INFO: Deleting DaemonSet.extensions daemon-set took: 65.204804ms
Aug 21 00:24:25.716: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.738061ms
Aug 21 00:24:31.823: INFO: Number of nodes with available pods: 0
Aug 21 00:24:31.823: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 00:24:31.831: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2875/daemonsets","resourceVersion":"1976163"},"items":null}
Aug 21 00:24:31.836: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2875/pods","resourceVersion":"1976163"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:24:31.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2875" for this suite.
• [SLOW TEST:23.136 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":54,"skipped":950,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:24:31.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:24:35.492: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:24:37.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566275, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566275, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566275, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733566275, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:24:40.796: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not 
be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:24:42.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-644" for this suite. STEP: Destroying namespace "webhook-644-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.485 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":55,"skipped":959,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret 
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:24:42.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-a02efa63-19da-4ca9-9148-a31ae75cd4da
STEP: Creating a pod to test consume secrets
Aug 21 00:24:42.830: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16" in namespace "projected-31" to be "success or failure"
Aug 21 00:24:42.881: INFO: Pod "pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16": Phase="Pending", Reason="", readiness=false. Elapsed: 50.645249ms
Aug 21 00:24:44.899: INFO: Pod "pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068787616s
Aug 21 00:24:46.983: INFO: Pod "pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15301477s
Aug 21 00:24:49.019: INFO: Pod "pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.189015627s
STEP: Saw pod success
Aug 21 00:24:49.020: INFO: Pod "pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16" satisfied condition "success or failure"
Aug 21 00:24:49.026: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16 container projected-secret-volume-test:
STEP: delete the pod
Aug 21 00:24:49.265: INFO: Waiting for pod pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16 to disappear
Aug 21 00:24:49.397: INFO: Pod pod-projected-secrets-3797e242-3fd9-42ec-a23f-6a9ef4488e16 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:24:49.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-31" for this suite.
• [SLOW TEST:6.972 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":979,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:24:49.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 21 00:24:49.654: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1155" to be "success or failure"
Aug 21 00:24:49.705: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 50.673204ms
Aug 21 00:24:51.713: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058237409s
Aug 21 00:24:53.719: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064218182s
Aug 21 00:24:55.741: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086837058s
STEP: Saw pod success
Aug 21 00:24:55.741: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 21 00:24:55.804: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Aug 21 00:24:55.861: INFO: Waiting for pod pod-host-path-test to disappear
Aug 21 00:24:55.874: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:24:55.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1155" for this suite.
• [SLOW TEST:6.473 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":997,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:24:55.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 21 00:24:56.041: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 00:25:15.817: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:26:24.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1033" for this suite.
• [SLOW TEST:88.310 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":58,"skipped":999,"failed":0}
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:26:24.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-b7aa1666-295b-41c9-bdf2-ab72c26ad8d0
STEP: Creating a pod to test consume configMaps
Aug 21 00:26:24.303: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff" in namespace "projected-641" to be "success or failure"
Aug 21 00:26:24.312: INFO: Pod "pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.864135ms
Aug 21 00:26:26.337: INFO: Pod "pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034140413s
Aug 21 00:26:28.344: INFO: Pod "pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040877638s
STEP: Saw pod success
Aug 21 00:26:28.344: INFO: Pod "pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff" satisfied condition "success or failure"
Aug 21 00:26:28.348: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 00:26:28.379: INFO: Waiting for pod pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff to disappear
Aug 21 00:26:28.395: INFO: Pod pod-projected-configmaps-610dea32-66af-42e0-a808-5470fa4d11ff no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:26:28.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-641" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1000,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:26:28.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:26:33.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2725" for this suite.
• [SLOW TEST:5.469 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":60,"skipped":1009,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:26:33.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:26:33.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9" in namespace "projected-3648" to be "success or failure"
Aug 21 00:26:33.982: INFO: Pod "downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.724737ms
Aug 21 00:26:35.989: INFO: Pod "downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043777705s
Aug 21 00:26:37.996: INFO: Pod "downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050801459s
STEP: Saw pod success
Aug 21 00:26:37.996: INFO: Pod "downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9" satisfied condition "success or failure"
Aug 21 00:26:38.001: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9 container client-container:
STEP: delete the pod
Aug 21 00:26:38.097: INFO: Waiting for pod downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9 to disappear
Aug 21 00:26:38.102: INFO: Pod downwardapi-volume-5a3b5cc7-4b07-44cd-87c7-d2d0b52feda9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:26:38.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3648" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1016,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:26:38.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-7a9419b6-7791-4899-961e-af2eea0b913a
STEP: Creating a pod to test consume configMaps
Aug 21 00:26:38.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323" in namespace "projected-320" to be "success or failure"
Aug 21 00:26:38.229: INFO: Pod "pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660605ms
Aug 21 00:26:40.235: INFO: Pod "pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016512878s
Aug 21 00:26:42.243: INFO: Pod "pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024014942s
STEP: Saw pod success
Aug 21 00:26:42.243: INFO: Pod "pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323" satisfied condition "success or failure"
Aug 21 00:26:42.248: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323 container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 00:26:42.345: INFO: Waiting for pod pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323 to disappear
Aug 21 00:26:42.384: INFO: Pod pod-projected-configmaps-e03c6d80-a590-4e81-9470-a2bb6fd58323 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:26:42.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-320" for this suite.
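The "consumable from pods in volume with mappings" spec above mounts a configMap through a projected volume and remaps a key to a different file path via the volume's `items` list. As a rough illustration of the manifest shape such a test builds (the image, key names, and paths below are assumptions for the sketch, not the suite's actual values), expressed as a plain Python dict:

```python
# Hypothetical sketch: a pod that consumes a configMap key through a
# projected volume, remapping the key "data-2" to the file "path/to/data-2".
def projected_configmap_pod(cm_name: str, pod_name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "projected-configmap-volume-test",  # container name seen in the log
                "image": "busybox",  # assumed image
                "command": ["cat", "/etc/projected-configmap-volume/path/to/data-2"],
                "volumeMounts": [{
                    "name": "projected-configmap-volume",
                    "mountPath": "/etc/projected-configmap-volume",
                }],
            }],
            "volumes": [{
                "name": "projected-configmap-volume",
                "projected": {"sources": [{
                    "configMap": {
                        "name": cm_name,
                        # the "mapping": key data-2 is exposed as path/to/data-2
                        "items": [{"key": "data-2", "path": "path/to/data-2"}],
                    },
                }]},
            }],
        },
    }

pod = projected_configmap_pod("projected-configmap-test-volume-map", "pod-projected-configmaps-demo")
```

The test then waits for the pod to reach "success or failure" and reads the container log to verify the remapped file's contents.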
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1029,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:26:42.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:26:42.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9" in namespace "downward-api-9321" to be "success or failure"
Aug 21 00:26:42.609: INFO: Pod "downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 39.175862ms
Aug 21 00:26:44.615: INFO: Pod "downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045870662s
Aug 21 00:26:46.630: INFO: Pod "downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060870303s
STEP: Saw pod success
Aug 21 00:26:46.631: INFO: Pod "downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9" satisfied condition "success or failure"
Aug 21 00:26:46.634: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9 container client-container:
STEP: delete the pod
Aug 21 00:26:46.656: INFO: Waiting for pod downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9 to disappear
Aug 21 00:26:46.660: INFO: Pod downwardapi-volume-f7506a64-0896-4e66-a5a2-61dbebefb0c9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:26:46.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9321" for this suite.
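The "should provide podname only" spec above uses a downwardAPI volume to project the pod's own `metadata.name` into a single file, which the container then prints. A hedged sketch of such a manifest (image, mount path, and file name are illustrative assumptions):

```python
# Hypothetical sketch: a pod whose downwardAPI volume exposes the pod's
# own name as the file /etc/podinfo/podname.
def downward_api_podname_pod(pod_name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",  # container name seen in the log
                "image": "busybox",  # assumed image
                "command": ["cat", "/etc/podinfo/podname"],
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    # fieldRef resolves to this pod's metadata.name at runtime
                    "items": [{"path": "podname",
                               "fieldRef": {"fieldPath": "metadata.name"}}],
                },
            }],
        },
    }

pod = downward_api_podname_pod("downwardapi-volume-demo")
```

The suite's assertion is then simply that the container log equals the pod's name.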
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1071,"failed":0}
SSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:26:46.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-b3e3ade9-6e79-4d44-a177-4ad865c3df71 in namespace container-probe-9625
Aug 21 00:26:51.063: INFO: Started pod test-webserver-b3e3ade9-6e79-4d44-a177-4ad865c3df71 in namespace container-probe-9625
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 00:26:51.067: INFO: Initial restart count of pod test-webserver-b3e3ade9-6e79-4d44-a177-4ad865c3df71 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:30:52.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9625" for this suite.
• [SLOW TEST:245.431 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1074,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:30:52.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-d6103d23-ae4f-48fe-a1ca-726242dcfb53
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:30:58.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9069" for this suite.
• [SLOW TEST:6.762 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1092,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:30:58.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-557c700d-fb34-46ff-9ca9-fe8e5632bfda in namespace container-probe-5606 Aug 21 00:31:05.430: INFO: Started pod busybox-557c700d-fb34-46ff-9ca9-fe8e5632bfda in namespace container-probe-5606 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 00:31:05.434: INFO: Initial restart count of pod busybox-557c700d-fb34-46ff-9ca9-fe8e5632bfda is 0 Aug 21 00:31:59.994: INFO: Restart count of pod container-probe-5606/busybox-557c700d-fb34-46ff-9ca9-fe8e5632bfda is now 1 (54.559905814s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:32:00.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5606" for this suite. 
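The restart the log records (restartCount going from 0 to 1 after ~54s) is driven by an exec liveness probe that starts failing once its health file disappears. A hedged sketch of that pattern — the pod name, image, and exact timings are assumptions for illustration; the probe command `cat /tmp/health` is taken from the spec name:

```yaml
# Hypothetical busybox pod whose liveness probe fails after /tmp/health
# is removed, so the kubelet restarts the container -- producing the
# restartCount 0 -> 1 transition seen in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo        # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox                # assumed image
    # Create the health file, then delete it to trigger probe failures.
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once `cat /tmp/health` exits non-zero for `failureThreshold` consecutive probes (3 by default), the kubelet kills and restarts the container, which is exactly the state change the test polls for.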
• [SLOW TEST:61.198 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1110,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:32:00.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7237 [It] Scaling should happen in predictable order and halt if 
any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7237 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7237 Aug 21 00:32:00.248: INFO: Found 0 stateful pods, waiting for 1 Aug 21 00:32:10.257: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 21 00:32:10.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 00:32:15.180: INFO: stderr: "I0821 00:32:15.029879 445 log.go:172] (0x40001142c0) (0x400045f400) Create stream\nI0821 00:32:15.032698 445 log.go:172] (0x40001142c0) (0x400045f400) Stream added, broadcasting: 1\nI0821 00:32:15.045253 445 log.go:172] (0x40001142c0) Reply frame received for 1\nI0821 00:32:15.046221 445 log.go:172] (0x40001142c0) (0x4000698640) Create stream\nI0821 00:32:15.046310 445 log.go:172] (0x40001142c0) (0x4000698640) Stream added, broadcasting: 3\nI0821 00:32:15.048211 445 log.go:172] (0x40001142c0) Reply frame received for 3\nI0821 00:32:15.048697 445 log.go:172] (0x40001142c0) (0x4000811a40) Create stream\nI0821 00:32:15.048864 445 log.go:172] (0x40001142c0) (0x4000811a40) Stream added, broadcasting: 5\nI0821 00:32:15.050157 445 log.go:172] (0x40001142c0) Reply frame received for 5\nI0821 00:32:15.113210 445 log.go:172] (0x40001142c0) Data frame received for 5\nI0821 00:32:15.113504 445 log.go:172] (0x4000811a40) (5) Data frame handling\nI0821 00:32:15.114170 445 log.go:172] (0x4000811a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 
00:32:15.156322 445 log.go:172] (0x40001142c0) Data frame received for 5\nI0821 00:32:15.156508 445 log.go:172] (0x4000811a40) (5) Data frame handling\nI0821 00:32:15.156686 445 log.go:172] (0x40001142c0) Data frame received for 3\nI0821 00:32:15.156839 445 log.go:172] (0x4000698640) (3) Data frame handling\nI0821 00:32:15.156951 445 log.go:172] (0x4000698640) (3) Data frame sent\nI0821 00:32:15.157069 445 log.go:172] (0x40001142c0) Data frame received for 3\nI0821 00:32:15.157187 445 log.go:172] (0x4000698640) (3) Data frame handling\nI0821 00:32:15.159607 445 log.go:172] (0x40001142c0) Data frame received for 1\nI0821 00:32:15.159714 445 log.go:172] (0x400045f400) (1) Data frame handling\nI0821 00:32:15.159823 445 log.go:172] (0x400045f400) (1) Data frame sent\nI0821 00:32:15.160828 445 log.go:172] (0x40001142c0) (0x400045f400) Stream removed, broadcasting: 1\nI0821 00:32:15.163407 445 log.go:172] (0x40001142c0) Go away received\nI0821 00:32:15.166832 445 log.go:172] (0x40001142c0) (0x400045f400) Stream removed, broadcasting: 1\nI0821 00:32:15.167189 445 log.go:172] (0x40001142c0) (0x4000698640) Stream removed, broadcasting: 3\nI0821 00:32:15.167403 445 log.go:172] (0x40001142c0) (0x4000811a40) Stream removed, broadcasting: 5\n" Aug 21 00:32:15.181: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 00:32:15.181: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 00:32:15.187: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 21 00:32:25.207: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 00:32:25.207: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 00:32:25.226: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999976027s Aug 21 00:32:26.235: INFO: Verifying statefulset ss doesn't scale past 1 for another 
8.99048596s Aug 21 00:32:27.241: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982215603s Aug 21 00:32:28.248: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975918318s Aug 21 00:32:29.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.968830667s Aug 21 00:32:30.265: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.959415472s Aug 21 00:32:31.272: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.951834832s Aug 21 00:32:32.279: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.94471017s Aug 21 00:32:33.306: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.938085794s Aug 21 00:32:34.314: INFO: Verifying statefulset ss doesn't scale past 1 for another 910.869325ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7237 Aug 21 00:32:35.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:32:36.826: INFO: stderr: "I0821 00:32:36.693057 476 log.go:172] (0x4000a7e000) (0x400095c000) Create stream\nI0821 00:32:36.699704 476 log.go:172] (0x4000a7e000) (0x400095c000) Stream added, broadcasting: 1\nI0821 00:32:36.712432 476 log.go:172] (0x4000a7e000) Reply frame received for 1\nI0821 00:32:36.713607 476 log.go:172] (0x4000a7e000) (0x400084e000) Create stream\nI0821 00:32:36.713719 476 log.go:172] (0x4000a7e000) (0x400084e000) Stream added, broadcasting: 3\nI0821 00:32:36.716078 476 log.go:172] (0x4000a7e000) Reply frame received for 3\nI0821 00:32:36.716555 476 log.go:172] (0x4000a7e000) (0x400084e0a0) Create stream\nI0821 00:32:36.716659 476 log.go:172] (0x4000a7e000) (0x400084e0a0) Stream added, broadcasting: 5\nI0821 00:32:36.718584 476 log.go:172] (0x4000a7e000) Reply frame received for 5\nI0821 00:32:36.805446 476 log.go:172] 
(0x4000a7e000) Data frame received for 5\nI0821 00:32:36.806212 476 log.go:172] (0x4000a7e000) Data frame received for 3\nI0821 00:32:36.806379 476 log.go:172] (0x400084e000) (3) Data frame handling\nI0821 00:32:36.806552 476 log.go:172] (0x4000a7e000) Data frame received for 1\nI0821 00:32:36.806651 476 log.go:172] (0x400095c000) (1) Data frame handling\nI0821 00:32:36.806741 476 log.go:172] (0x400084e0a0) (5) Data frame handling\nI0821 00:32:36.807349 476 log.go:172] (0x400084e0a0) (5) Data frame sent\nI0821 00:32:36.807438 476 log.go:172] (0x400095c000) (1) Data frame sent\nI0821 00:32:36.807541 476 log.go:172] (0x400084e000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:32:36.808699 476 log.go:172] (0x4000a7e000) Data frame received for 3\nI0821 00:32:36.810295 476 log.go:172] (0x4000a7e000) Data frame received for 5\nI0821 00:32:36.811164 476 log.go:172] (0x4000a7e000) (0x400095c000) Stream removed, broadcasting: 1\nI0821 00:32:36.811611 476 log.go:172] (0x400084e0a0) (5) Data frame handling\nI0821 00:32:36.811842 476 log.go:172] (0x400084e000) (3) Data frame handling\nI0821 00:32:36.812801 476 log.go:172] (0x4000a7e000) Go away received\nI0821 00:32:36.814952 476 log.go:172] (0x4000a7e000) (0x400095c000) Stream removed, broadcasting: 1\nI0821 00:32:36.815171 476 log.go:172] (0x4000a7e000) (0x400084e000) Stream removed, broadcasting: 3\nI0821 00:32:36.815336 476 log.go:172] (0x4000a7e000) (0x400084e0a0) Stream removed, broadcasting: 5\n" Aug 21 00:32:36.827: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 00:32:36.827: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 00:32:36.892: INFO: Found 1 stateful pods, waiting for 3 Aug 21 00:32:46.901: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 00:32:46.901: INFO: Waiting for pod ss-1 to enter 
Running - Ready=true, currently Running - Ready=true Aug 21 00:32:46.901: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 21 00:32:46.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 00:32:48.390: INFO: stderr: "I0821 00:32:48.254504 501 log.go:172] (0x40009d0d10) (0x40006cff40) Create stream\nI0821 00:32:48.257287 501 log.go:172] (0x40009d0d10) (0x40006cff40) Stream added, broadcasting: 1\nI0821 00:32:48.270099 501 log.go:172] (0x40009d0d10) Reply frame received for 1\nI0821 00:32:48.270807 501 log.go:172] (0x40009d0d10) (0x40004d9540) Create stream\nI0821 00:32:48.270876 501 log.go:172] (0x40009d0d10) (0x40004d9540) Stream added, broadcasting: 3\nI0821 00:32:48.272389 501 log.go:172] (0x40009d0d10) Reply frame received for 3\nI0821 00:32:48.272631 501 log.go:172] (0x40009d0d10) (0x40007d8000) Create stream\nI0821 00:32:48.272683 501 log.go:172] (0x40009d0d10) (0x40007d8000) Stream added, broadcasting: 5\nI0821 00:32:48.273950 501 log.go:172] (0x40009d0d10) Reply frame received for 5\nI0821 00:32:48.362892 501 log.go:172] (0x40009d0d10) Data frame received for 5\nI0821 00:32:48.363220 501 log.go:172] (0x40009d0d10) Data frame received for 3\nI0821 00:32:48.363371 501 log.go:172] (0x40004d9540) (3) Data frame handling\nI0821 00:32:48.365823 501 log.go:172] (0x40009d0d10) Data frame received for 1\nI0821 00:32:48.367780 501 log.go:172] (0x40006cff40) (1) Data frame handling\nI0821 00:32:48.368976 501 log.go:172] (0x40007d8000) (5) Data frame handling\nI0821 00:32:48.371705 501 log.go:172] (0x40007d8000) (5) Data frame sent\nI0821 00:32:48.372039 501 log.go:172] (0x40006cff40) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 
00:32:48.376070 501 log.go:172] (0x40009d0d10) (0x40006cff40) Stream removed, broadcasting: 1\nI0821 00:32:48.376280 501 log.go:172] (0x40009d0d10) Data frame received for 5\nI0821 00:32:48.376386 501 log.go:172] (0x40007d8000) (5) Data frame handling\nI0821 00:32:48.376484 501 log.go:172] (0x40004d9540) (3) Data frame sent\nI0821 00:32:48.376599 501 log.go:172] (0x40009d0d10) Data frame received for 3\nI0821 00:32:48.376685 501 log.go:172] (0x40004d9540) (3) Data frame handling\nI0821 00:32:48.377765 501 log.go:172] (0x40009d0d10) Go away received\nI0821 00:32:48.381620 501 log.go:172] (0x40009d0d10) (0x40006cff40) Stream removed, broadcasting: 1\nI0821 00:32:48.381952 501 log.go:172] (0x40009d0d10) (0x40004d9540) Stream removed, broadcasting: 3\nI0821 00:32:48.382173 501 log.go:172] (0x40009d0d10) (0x40007d8000) Stream removed, broadcasting: 5\n" Aug 21 00:32:48.391: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 00:32:48.391: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 00:32:48.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 00:32:49.942: INFO: stderr: "I0821 00:32:49.715759 525 log.go:172] (0x40001102c0) (0x40007cd9a0) Create stream\nI0821 00:32:49.720847 525 log.go:172] (0x40001102c0) (0x40007cd9a0) Stream added, broadcasting: 1\nI0821 00:32:49.733608 525 log.go:172] (0x40001102c0) Reply frame received for 1\nI0821 00:32:49.734382 525 log.go:172] (0x40001102c0) (0x4000966000) Create stream\nI0821 00:32:49.734456 525 log.go:172] (0x40001102c0) (0x4000966000) Stream added, broadcasting: 3\nI0821 00:32:49.736235 525 log.go:172] (0x40001102c0) Reply frame received for 3\nI0821 00:32:49.736863 525 log.go:172] (0x40001102c0) (0x40007cdb80) Create stream\nI0821 00:32:49.736992 525 
log.go:172] (0x40001102c0) (0x40007cdb80) Stream added, broadcasting: 5\nI0821 00:32:49.738717 525 log.go:172] (0x40001102c0) Reply frame received for 5\nI0821 00:32:49.806264 525 log.go:172] (0x40001102c0) Data frame received for 5\nI0821 00:32:49.806678 525 log.go:172] (0x40007cdb80) (5) Data frame handling\nI0821 00:32:49.807314 525 log.go:172] (0x40007cdb80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 00:32:49.922450 525 log.go:172] (0x40001102c0) Data frame received for 3\nI0821 00:32:49.922632 525 log.go:172] (0x4000966000) (3) Data frame handling\nI0821 00:32:49.922760 525 log.go:172] (0x40001102c0) Data frame received for 5\nI0821 00:32:49.922898 525 log.go:172] (0x40007cdb80) (5) Data frame handling\nI0821 00:32:49.923232 525 log.go:172] (0x4000966000) (3) Data frame sent\nI0821 00:32:49.923459 525 log.go:172] (0x40001102c0) Data frame received for 3\nI0821 00:32:49.923628 525 log.go:172] (0x4000966000) (3) Data frame handling\nI0821 00:32:49.923769 525 log.go:172] (0x40001102c0) Data frame received for 1\nI0821 00:32:49.923880 525 log.go:172] (0x40007cd9a0) (1) Data frame handling\nI0821 00:32:49.924018 525 log.go:172] (0x40007cd9a0) (1) Data frame sent\nI0821 00:32:49.925788 525 log.go:172] (0x40001102c0) (0x40007cd9a0) Stream removed, broadcasting: 1\nI0821 00:32:49.929566 525 log.go:172] (0x40001102c0) Go away received\nI0821 00:32:49.932472 525 log.go:172] (0x40001102c0) (0x40007cd9a0) Stream removed, broadcasting: 1\nI0821 00:32:49.933077 525 log.go:172] (0x40001102c0) (0x4000966000) Stream removed, broadcasting: 3\nI0821 00:32:49.933681 525 log.go:172] (0x40001102c0) (0x40007cdb80) Stream removed, broadcasting: 5\n" Aug 21 00:32:49.944: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 00:32:49.944: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 00:32:49.945: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 00:32:51.479: INFO: stderr: "I0821 00:32:51.319510 549 log.go:172] (0x400011a2c0) (0x4000702000) Create stream\nI0821 00:32:51.323983 549 log.go:172] (0x400011a2c0) (0x4000702000) Stream added, broadcasting: 1\nI0821 00:32:51.338637 549 log.go:172] (0x400011a2c0) Reply frame received for 1\nI0821 00:32:51.339222 549 log.go:172] (0x400011a2c0) (0x400075a000) Create stream\nI0821 00:32:51.339297 549 log.go:172] (0x400011a2c0) (0x400075a000) Stream added, broadcasting: 3\nI0821 00:32:51.342067 549 log.go:172] (0x400011a2c0) Reply frame received for 3\nI0821 00:32:51.342855 549 log.go:172] (0x400011a2c0) (0x400075a0a0) Create stream\nI0821 00:32:51.343029 549 log.go:172] (0x400011a2c0) (0x400075a0a0) Stream added, broadcasting: 5\nI0821 00:32:51.345496 549 log.go:172] (0x400011a2c0) Reply frame received for 5\nI0821 00:32:51.421966 549 log.go:172] (0x400011a2c0) Data frame received for 5\nI0821 00:32:51.422163 549 log.go:172] (0x400075a0a0) (5) Data frame handling\nI0821 00:32:51.422554 549 log.go:172] (0x400075a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 00:32:51.458962 549 log.go:172] (0x400011a2c0) Data frame received for 3\nI0821 00:32:51.459195 549 log.go:172] (0x400075a000) (3) Data frame handling\nI0821 00:32:51.459330 549 log.go:172] (0x400075a000) (3) Data frame sent\nI0821 00:32:51.459460 549 log.go:172] (0x400011a2c0) Data frame received for 3\nI0821 00:32:51.459666 549 log.go:172] (0x400011a2c0) Data frame received for 5\nI0821 00:32:51.459827 549 log.go:172] (0x400075a0a0) (5) Data frame handling\nI0821 00:32:51.460051 549 log.go:172] (0x400075a000) (3) Data frame handling\nI0821 00:32:51.460244 549 log.go:172] (0x400011a2c0) Data frame received for 1\nI0821 00:32:51.460343 549 log.go:172] (0x4000702000) (1) Data frame handling\nI0821 
00:32:51.460436 549 log.go:172] (0x4000702000) (1) Data frame sent\nI0821 00:32:51.462651 549 log.go:172] (0x400011a2c0) (0x4000702000) Stream removed, broadcasting: 1\nI0821 00:32:51.464674 549 log.go:172] (0x400011a2c0) Go away received\nI0821 00:32:51.467938 549 log.go:172] (0x400011a2c0) (0x4000702000) Stream removed, broadcasting: 1\nI0821 00:32:51.468239 549 log.go:172] (0x400011a2c0) (0x400075a000) Stream removed, broadcasting: 3\nI0821 00:32:51.468466 549 log.go:172] (0x400011a2c0) (0x400075a0a0) Stream removed, broadcasting: 5\n" Aug 21 00:32:51.480: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 00:32:51.480: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 00:32:51.480: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 00:32:51.485: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 21 00:33:01.500: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 00:33:01.500: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 21 00:33:01.500: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 21 00:33:01.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999996504s Aug 21 00:33:02.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99303665s Aug 21 00:33:03.544: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972701034s Aug 21 00:33:04.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.965035512s Aug 21 00:33:05.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.831738357s Aug 21 00:33:06.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.78985723s Aug 21 00:33:07.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 
3.632741522s Aug 21 00:33:08.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.624178421s Aug 21 00:33:09.904: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.615202005s Aug 21 00:33:10.933: INFO: Verifying statefulset ss doesn't scale past 3 for another 604.722912ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7237 Aug 21 00:33:11.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:33:13.363: INFO: stderr: "I0821 00:33:13.270869 572 log.go:172] (0x40007e4000) (0x4000509680) Create stream\nI0821 00:33:13.274460 572 log.go:172] (0x40007e4000) (0x4000509680) Stream added, broadcasting: 1\nI0821 00:33:13.287136 572 log.go:172] (0x40007e4000) Reply frame received for 1\nI0821 00:33:13.287696 572 log.go:172] (0x40007e4000) (0x4000a44000) Create stream\nI0821 00:33:13.287769 572 log.go:172] (0x40007e4000) (0x4000a44000) Stream added, broadcasting: 3\nI0821 00:33:13.289887 572 log.go:172] (0x40007e4000) Reply frame received for 3\nI0821 00:33:13.290424 572 log.go:172] (0x40007e4000) (0x40007028c0) Create stream\nI0821 00:33:13.290529 572 log.go:172] (0x40007e4000) (0x40007028c0) Stream added, broadcasting: 5\nI0821 00:33:13.292122 572 log.go:172] (0x40007e4000) Reply frame received for 5\nI0821 00:33:13.341476 572 log.go:172] (0x40007e4000) Data frame received for 5\nI0821 00:33:13.341840 572 log.go:172] (0x40007028c0) (5) Data frame handling\nI0821 00:33:13.342588 572 log.go:172] (0x40007028c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:33:13.343305 572 log.go:172] (0x40007e4000) Data frame received for 1\nI0821 00:33:13.343444 572 log.go:172] (0x4000509680) (1) Data frame handling\nI0821 00:33:13.343556 572 log.go:172] (0x4000509680) (1) Data frame sent\nI0821 00:33:13.343660 572 
log.go:172] (0x40007e4000) Data frame received for 3\nI0821 00:33:13.343772 572 log.go:172] (0x4000a44000) (3) Data frame handling\nI0821 00:33:13.343890 572 log.go:172] (0x40007e4000) Data frame received for 5\nI0821 00:33:13.344034 572 log.go:172] (0x40007028c0) (5) Data frame handling\nI0821 00:33:13.344146 572 log.go:172] (0x4000a44000) (3) Data frame sent\nI0821 00:33:13.344274 572 log.go:172] (0x40007e4000) Data frame received for 3\nI0821 00:33:13.344367 572 log.go:172] (0x4000a44000) (3) Data frame handling\nI0821 00:33:13.345680 572 log.go:172] (0x40007e4000) (0x4000509680) Stream removed, broadcasting: 1\nI0821 00:33:13.348830 572 log.go:172] (0x40007e4000) Go away received\nI0821 00:33:13.351187 572 log.go:172] (0x40007e4000) (0x4000509680) Stream removed, broadcasting: 1\nI0821 00:33:13.351645 572 log.go:172] (0x40007e4000) (0x4000a44000) Stream removed, broadcasting: 3\nI0821 00:33:13.352566 572 log.go:172] (0x40007e4000) (0x40007028c0) Stream removed, broadcasting: 5\n" Aug 21 00:33:13.365: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 00:33:13.365: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 00:33:13.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:33:14.829: INFO: stderr: "I0821 00:33:14.710685 595 log.go:172] (0x40007c4b00) (0x400070a0a0) Create stream\nI0821 00:33:14.718928 595 log.go:172] (0x40007c4b00) (0x400070a0a0) Stream added, broadcasting: 1\nI0821 00:33:14.730884 595 log.go:172] (0x40007c4b00) Reply frame received for 1\nI0821 00:33:14.731486 595 log.go:172] (0x40007c4b00) (0x40006ebc20) Create stream\nI0821 00:33:14.731569 595 log.go:172] (0x40007c4b00) (0x40006ebc20) Stream added, broadcasting: 3\nI0821 00:33:14.733164 595 log.go:172] (0x40007c4b00) 
Reply frame received for 3\nI0821 00:33:14.733406 595 log.go:172] (0x40007c4b00) (0x40007ae000) Create stream\nI0821 00:33:14.733464 595 log.go:172] (0x40007c4b00) (0x40007ae000) Stream added, broadcasting: 5\nI0821 00:33:14.734699 595 log.go:172] (0x40007c4b00) Reply frame received for 5\nI0821 00:33:14.807154 595 log.go:172] (0x40007c4b00) Data frame received for 5\nI0821 00:33:14.807515 595 log.go:172] (0x40007c4b00) Data frame received for 1\nI0821 00:33:14.807684 595 log.go:172] (0x400070a0a0) (1) Data frame handling\nI0821 00:33:14.807895 595 log.go:172] (0x40007c4b00) Data frame received for 3\nI0821 00:33:14.808069 595 log.go:172] (0x40006ebc20) (3) Data frame handling\nI0821 00:33:14.808256 595 log.go:172] (0x40007ae000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:33:14.811016 595 log.go:172] (0x40006ebc20) (3) Data frame sent\nI0821 00:33:14.811269 595 log.go:172] (0x40007ae000) (5) Data frame sent\nI0821 00:33:14.812125 595 log.go:172] (0x40007c4b00) Data frame received for 5\nI0821 00:33:14.812244 595 log.go:172] (0x40007c4b00) Data frame received for 3\nI0821 00:33:14.812369 595 log.go:172] (0x40006ebc20) (3) Data frame handling\nI0821 00:33:14.812663 595 log.go:172] (0x40007ae000) (5) Data frame handling\nI0821 00:33:14.813058 595 log.go:172] (0x400070a0a0) (1) Data frame sent\nI0821 00:33:14.813932 595 log.go:172] (0x40007c4b00) (0x400070a0a0) Stream removed, broadcasting: 1\nI0821 00:33:14.814646 595 log.go:172] (0x40007c4b00) Go away received\nI0821 00:33:14.818472 595 log.go:172] (0x40007c4b00) (0x400070a0a0) Stream removed, broadcasting: 1\nI0821 00:33:14.818936 595 log.go:172] (0x40007c4b00) (0x40006ebc20) Stream removed, broadcasting: 3\nI0821 00:33:14.819232 595 log.go:172] (0x40007c4b00) (0x40007ae000) Stream removed, broadcasting: 5\n" Aug 21 00:33:14.830: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 00:33:14.830: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 00:33:14.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:33:16.283: INFO: rc: 1 Aug 21 00:33:16.283: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Aug 21 00:33:26.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:33:27.552: INFO: rc: 1 Aug 21 00:33:27.552: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:33:37.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:33:38.805: INFO: rc: 1 Aug 21 00:33:38.805: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:33:48.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:33:50.084: INFO: rc: 1 Aug 21 00:33:50.084: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:34:00.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:34:01.318: INFO: rc: 1 Aug 21 00:34:01.319: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:34:11.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:34:12.568: INFO: rc: 1 Aug 21 00:34:12.569: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:34:22.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:34:23.818: INFO: rc: 1 Aug 21 00:34:23.819: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:34:33.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:34:35.067: INFO: rc: 1 Aug 21 00:34:35.068: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:34:45.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:34:46.304: INFO: rc: 1 Aug 21 00:34:46.305: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:34:56.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:34:57.526: INFO: rc: 1 Aug 21 00:34:57.526: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found 
error: exit status 1 Aug 21 00:35:07.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:35:08.747: INFO: rc: 1 Aug 21 00:35:08.747: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:35:18.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:35:19.967: INFO: rc: 1 Aug 21 00:35:19.968: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:35:29.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:35:31.511: INFO: rc: 1 Aug 21 00:35:31.512: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:35:41.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 
00:35:42.755: INFO: rc: 1 Aug 21 00:35:42.755: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:35:52.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:35:54.039: INFO: rc: 1 Aug 21 00:35:54.039: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:36:04.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:36:05.572: INFO: rc: 1 Aug 21 00:36:05.573: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:36:15.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:36:16.835: INFO: rc: 1 Aug 21 00:36:16.835: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:36:26.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:36:28.075: INFO: rc: 1 Aug 21 00:36:28.075: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:36:38.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:36:39.295: INFO: rc: 1 Aug 21 00:36:39.295: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:36:49.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:36:50.590: INFO: rc: 1 Aug 21 00:36:50.590: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:37:00.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:37:01.848: INFO: rc: 1 Aug 21 00:37:01.848: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:37:11.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:37:13.109: INFO: rc: 1 Aug 21 00:37:13.109: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:37:23.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:37:24.341: INFO: rc: 1 Aug 21 00:37:24.341: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:37:34.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:37:35.593: INFO: rc: 1 Aug 21 00:37:35.593: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:37:45.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:37:46.853: INFO: rc: 1 Aug 21 00:37:46.853: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:37:56.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:37:58.135: INFO: rc: 1 Aug 21 00:37:58.135: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 21 00:38:08.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:38:09.405: INFO: rc: 1 Aug 21 00:38:09.405: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found 
error: exit status 1 Aug 21 00:38:19.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 00:38:20.655: INFO: rc: 1 Aug 21 00:38:20.656: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Aug 21 00:38:20.656: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 21 00:38:20.679: INFO: Deleting all statefulset in ns statefulset-7237 Aug 21 00:38:20.685: INFO: Scaling statefulset ss to 0 Aug 21 00:38:20.696: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 00:38:20.699: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:38:20.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7237" for this suite. 
• [SLOW TEST:380.724 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":67,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:38:20.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:38:20.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5948' Aug 21 00:38:22.521: INFO: stderr: "" Aug 21 00:38:22.521: INFO: stdout: "replicationcontroller/agnhost-master created\n" Aug 21 00:38:22.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5948' Aug 21 00:38:24.442: INFO: stderr: "" Aug 21 00:38:24.442: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 21 00:38:25.451: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:38:25.452: INFO: Found 0 / 1 Aug 21 00:38:26.505: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:38:26.506: INFO: Found 1 / 1 Aug 21 00:38:26.506: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 21 00:38:26.511: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 00:38:26.511: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 21 00:38:26.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-584bz --namespace=kubectl-5948' Aug 21 00:38:27.866: INFO: stderr: "" Aug 21 00:38:27.866: INFO: stdout: "Name: agnhost-master-584bz\nNamespace: kubectl-5948\nPriority: 0\nNode: jerma-worker/172.18.0.6\nStart Time: Fri, 21 Aug 2020 00:38:22 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.231\nIPs:\n IP: 10.244.2.231\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://469ad293307df4ddd47cb3c3a90fdfbf9ff94091dfda1b0663af083b22b1b3d6\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 21 Aug 2020 00:38:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lnbgf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lnbgf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-lnbgf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-5948/agnhost-master-584bz to jerma-worker\n Normal Pulled 4s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker Started container agnhost-master\n" Aug 21 00:38:27.869: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5948' Aug 21 00:38:29.266: INFO: stderr: "" Aug 21 00:38:29.266: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5948\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-584bz\n" Aug 21 00:38:29.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5948' Aug 21 00:38:30.654: INFO: stderr: "" Aug 21 00:38:30.654: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5948\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.7.219\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.231:6379\nSession Affinity: None\nEvents: \n" Aug 21 00:38:30.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Aug 21 00:38:32.060: INFO: stderr: "" Aug 21 00:38:32.060: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:37:06 +0000\nTaints: 
node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Fri, 21 Aug 2020 00:38:28 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 21 Aug 2020 00:37:08 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 21 Aug 2020 00:37:08 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 21 Aug 2020 00:37:08 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 21 Aug 2020 00:37:08 +0000 Sat, 15 Aug 2020 09:37:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.10\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: e52c45bc589d48d995e8fd79ad5bf250\n System UUID: b981bdc7-d264-48ef-ab5e-3308e23aaf13\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.17.5\n Kube-Proxy Version: v1.17.5\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-bvrm4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d15h\n kube-system coredns-6955765f44-db8rh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d15h\n 
kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d15h\n kube-system kindnet-j88mt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d15h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d15h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d15h\n kube-system kube-proxy-hmb6l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d15h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d15h\n local-path-storage local-path-provisioner-58f6947c7-p2cqw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d15h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Aug 21 00:38:32.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5948' Aug 21 00:38:33.351: INFO: stderr: "" Aug 21 00:38:33.351: INFO: stdout: "Name: kubectl-5948\nLabels: e2e-framework=kubectl\n e2e-run=1e49d60e-6a90-4523-993a-99c952e0eed9\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:38:33.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5948" for this suite. 
• [SLOW TEST:12.568 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":68,"skipped":1139,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:38:33.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc 
simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0821 00:38:46.119746 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 21 00:38:46.119: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:38:46.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5273" for this suite. 
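The garbage-collector test above gives half of rc1's pods a second owner reference pointing at `simpletest-rc-to-stay`, then deletes `simpletest-rc-to-be-deleted`: a dependent is only collected once none of its owners remain, so those dual-owned pods survive. The local types below merely mimic the relevant shape of `metav1.OwnerReference` to illustrate that rule; they are not the real Kubernetes GC implementation:

```go
package main

import "fmt"

// ownerRef is a simplified stand-in for metav1.OwnerReference.
type ownerRef struct {
	Name    string
	Deleted bool // whether the owning object has itself been deleted
}

type pod struct {
	Name   string
	Owners []ownerRef
}

// collectible reports whether the GC may delete the pod: true only when
// every owner it references has been deleted.
func collectible(p pod) bool {
	for _, o := range p.Owners {
		if !o.Deleted {
			return false
		}
	}
	return true
}

func main() {
	rc1 := ownerRef{Name: "simpletest-rc-to-be-deleted", Deleted: true}
	rc2 := ownerRef{Name: "simpletest-rc-to-stay", Deleted: false}

	onlyRC1 := pod{Name: "pod-a", Owners: []ownerRef{rc1}}
	bothOwners := pod{Name: "pod-b", Owners: []ownerRef{rc1, rc2}}

	fmt.Println(collectible(onlyRC1))    // sole owner gone: collectible
	fmt.Println(collectible(bothOwners)) // rc2 still exists: kept
}
```

This is exactly the invariant the conformance test asserts: deleting rc1 must not take down dependents that still have a valid owner.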
• [SLOW TEST:13.042 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":69,"skipped":1155,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:38:46.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 21 00:38:46.795: INFO: Waiting up to 5m0s for pod "pod-10e363e0-1a14-4866-a201-337ec5bf0825" in namespace "emptydir-5111" to be "success or failure" Aug 21 00:38:46.833: INFO: Pod 
"pod-10e363e0-1a14-4866-a201-337ec5bf0825": Phase="Pending", Reason="", readiness=false. Elapsed: 38.645455ms Aug 21 00:38:48.924: INFO: Pod "pod-10e363e0-1a14-4866-a201-337ec5bf0825": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129021204s Aug 21 00:38:50.930: INFO: Pod "pod-10e363e0-1a14-4866-a201-337ec5bf0825": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135182217s STEP: Saw pod success Aug 21 00:38:50.930: INFO: Pod "pod-10e363e0-1a14-4866-a201-337ec5bf0825" satisfied condition "success or failure" Aug 21 00:38:50.935: INFO: Trying to get logs from node jerma-worker pod pod-10e363e0-1a14-4866-a201-337ec5bf0825 container test-container: STEP: delete the pod Aug 21 00:38:51.165: INFO: Waiting for pod pod-10e363e0-1a14-4866-a201-337ec5bf0825 to disappear Aug 21 00:38:51.427: INFO: Pod pod-10e363e0-1a14-4866-a201-337ec5bf0825 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:38:51.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5111" for this suite. 
• [SLOW TEST:5.036 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1171,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:38:51.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-1c07087c-ce21-4396-9c34-4b9b906d556d
STEP: Creating a pod to test consume configMaps
Aug 21 00:38:52.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5" in namespace "projected-321" to be "success or failure"
Aug 21 00:38:52.229: INFO: Pod "pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5": Phase="Pending", Reason="", readiness=false. Elapsed: 148.132072ms
Aug 21 00:38:54.262: INFO: Pod "pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181008403s
Aug 21 00:38:56.267: INFO: Pod "pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186353102s
Aug 21 00:38:58.275: INFO: Pod "pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.193686937s
STEP: Saw pod success
Aug 21 00:38:58.275: INFO: Pod "pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5" satisfied condition "success or failure"
Aug 21 00:38:58.281: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5 container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 00:38:58.387: INFO: Waiting for pod pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5 to disappear
Aug 21 00:38:58.392: INFO: Pod pod-projected-configmaps-63953b65-b1fa-4b95-9df3-3e4361d39da5 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:38:58.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-321" for this suite.
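The projected-configMap spec above consumes a ConfigMap through a projected volume while the pod runs as a non-root user. A sketch of such a manifest (the uid 1000, mount path, and all names are illustrative assumptions; the e2e framework generates its own randomized names):

```python
def projected_configmap_pod(name: str, configmap_name: str,
                            run_as_user: int = 1000) -> dict:
    """Sketch of a pod consuming a ConfigMap via a projected volume as non-root.

    The uid, names, image, and mount path are illustrative assumptions.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            # Non-zero uid makes every container run as non-root.
            "securityContext": {"runAsUser": run_as_user},
            "volumes": [{
                "name": "projected-configmap-volume",
                "projected": {
                    "sources": [{"configMap": {"name": configmap_name}}],
                },
            }],
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "busybox",
                # Dump the projected keys so contents can be verified from logs.
                "command": ["sh", "-c", "cat /etc/projected-configmap-volume/*"],
                "volumeMounts": [{
                    "name": "projected-configmap-volume",
                    "mountPath": "/etc/projected-configmap-volume",
                }],
            }],
        },
    }

pod = projected_configmap_pod("demo-pod", "demo-cm")
print(pod["spec"]["securityContext"])  # {'runAsUser': 1000}
```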
• [SLOW TEST:6.969 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:38:58.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-e025935b-ecd4-462a-bd96-29a2f4d2d854
[AfterEach] [sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:38:58.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-909" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":72,"skipped":1212,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:38:58.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:38:58.587: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1c067db6-265a-4272-a9e6-76b601c4153d" in namespace "security-context-test-9590" to be "success or failure"
Aug 21 00:38:58.596: INFO: Pod "busybox-privileged-false-1c067db6-265a-4272-a9e6-76b601c4153d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.769325ms
Aug 21 00:39:00.787: INFO: Pod "busybox-privileged-false-1c067db6-265a-4272-a9e6-76b601c4153d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199542993s
Aug 21 00:39:02.792: INFO: Pod "busybox-privileged-false-1c067db6-265a-4272-a9e6-76b601c4153d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205222842s
Aug 21 00:39:02.792: INFO: Pod "busybox-privileged-false-1c067db6-265a-4272-a9e6-76b601c4153d" satisfied condition "success or failure"
Aug 21 00:39:02.799: INFO: Got logs for pod "busybox-privileged-false-1c067db6-265a-4272-a9e6-76b601c4153d": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:39:02.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9590" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1230,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:39:02.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:39:14.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9218" for this suite.
• [SLOW TEST:11.218 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":74,"skipped":1238,"failed":0}
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:39:14.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-a7642de7-108b-41e0-b84e-c02582339f18
STEP: Creating a pod to test consume secrets
Aug 21 00:39:14.152: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b" in namespace "projected-6819" to be "success or failure"
Aug 21 00:39:14.195: INFO: Pod "pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.507769ms
Aug 21 00:39:16.221: INFO: Pod "pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069253248s
Aug 21 00:39:18.229: INFO: Pod "pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076628112s
STEP: Saw pod success
Aug 21 00:39:18.229: INFO: Pod "pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b" satisfied condition "success or failure"
Aug 21 00:39:18.234: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b container projected-secret-volume-test:
STEP: delete the pod
Aug 21 00:39:18.261: INFO: Waiting for pod pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b to disappear
Aug 21 00:39:18.391: INFO: Pod pod-projected-secrets-b49a2020-c639-4e6b-a33a-84f4f8ea2d2b no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:39:18.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6819" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1238,"failed":0}
SSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:39:18.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
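The KubeletManagedEtcHosts spec that follows creates one pod with hostNetwork=false, whose /etc/hosts the kubelet writes and manages, and one with hostNetwork=true, which sees the host's file unmodified. A sketch of the two manifests (the pod names "test-pod" and "test-host-network-pod" appear in the log below; the image and command are illustrative assumptions):

```python
def etc_hosts_test_pod(name: str, host_network: bool) -> dict:
    """Sketch of the pods a kubelet-managed /etc/hosts test creates.

    With hostNetwork=False the kubelet injects a managed /etc/hosts into
    the container; with hostNetwork=True the host's own file is used.
    Pod names match the log; image and command are illustrative.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "hostNetwork": host_network,  # the field under test
            "containers": [{
                "name": "busybox-1",
                "image": "busybox",
                # Keep the pod alive so the test can exec `cat /etc/hosts`.
                "command": ["sleep", "900"],
            }],
        },
    }

pods = [etc_hosts_test_pod("test-pod", False),
        etc_hosts_test_pod("test-host-network-pod", True)]
print([p["spec"]["hostNetwork"] for p in pods])  # [False, True]
```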
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 21 00:39:32.807: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:32.807: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:32.876903 7 log.go:172] (0x40031326e0) (0x40010ec640) Create stream I0821 00:39:32.877112 7 log.go:172] (0x40031326e0) (0x40010ec640) Stream added, broadcasting: 1 I0821 00:39:32.880855 7 log.go:172] (0x40031326e0) Reply frame received for 1 I0821 00:39:32.881133 7 log.go:172] (0x40031326e0) (0x4002cedea0) Create stream I0821 00:39:32.881249 7 log.go:172] (0x40031326e0) (0x4002cedea0) Stream added, broadcasting: 3 I0821 00:39:32.882992 7 log.go:172] (0x40031326e0) Reply frame received for 3 I0821 00:39:32.883157 7 log.go:172] (0x40031326e0) (0x40010ec8c0) Create stream I0821 00:39:32.883234 7 log.go:172] (0x40031326e0) (0x40010ec8c0) Stream added, broadcasting: 5 I0821 00:39:32.884884 7 log.go:172] (0x40031326e0) Reply frame received for 5 I0821 00:39:32.972662 7 log.go:172] (0x40031326e0) Data frame received for 3 I0821 00:39:32.972916 7 log.go:172] (0x4002cedea0) (3) Data frame handling I0821 00:39:32.973008 7 log.go:172] (0x4002cedea0) (3) Data frame sent I0821 00:39:32.973079 7 log.go:172] (0x40031326e0) Data frame received for 3 I0821 00:39:32.973148 7 log.go:172] (0x4002cedea0) (3) Data frame handling I0821 00:39:32.973269 7 log.go:172] (0x40031326e0) Data frame received for 5 I0821 00:39:32.973363 7 log.go:172] (0x40010ec8c0) (5) Data frame handling I0821 00:39:32.973931 7 log.go:172] (0x40031326e0) Data frame 
received for 1 I0821 00:39:32.974005 7 log.go:172] (0x40010ec640) (1) Data frame handling I0821 00:39:32.974067 7 log.go:172] (0x40010ec640) (1) Data frame sent I0821 00:39:32.974148 7 log.go:172] (0x40031326e0) (0x40010ec640) Stream removed, broadcasting: 1 I0821 00:39:32.974463 7 log.go:172] (0x40031326e0) (0x40010ec640) Stream removed, broadcasting: 1 I0821 00:39:32.974531 7 log.go:172] (0x40031326e0) (0x4002cedea0) Stream removed, broadcasting: 3 I0821 00:39:32.974594 7 log.go:172] (0x40031326e0) (0x40010ec8c0) Stream removed, broadcasting: 5 Aug 21 00:39:32.974: INFO: Exec stderr: "" Aug 21 00:39:32.975: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:32.975: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:32.977017 7 log.go:172] (0x40031326e0) Go away received I0821 00:39:33.025326 7 log.go:172] (0x4002ed86e0) (0x4001b1c280) Create stream I0821 00:39:33.025444 7 log.go:172] (0x4002ed86e0) (0x4001b1c280) Stream added, broadcasting: 1 I0821 00:39:33.035605 7 log.go:172] (0x4002ed86e0) Reply frame received for 1 I0821 00:39:33.035904 7 log.go:172] (0x4002ed86e0) (0x4001b1c320) Create stream I0821 00:39:33.036014 7 log.go:172] (0x4002ed86e0) (0x4001b1c320) Stream added, broadcasting: 3 I0821 00:39:33.038872 7 log.go:172] (0x4002ed86e0) Reply frame received for 3 I0821 00:39:33.039004 7 log.go:172] (0x4002ed86e0) (0x40010ec960) Create stream I0821 00:39:33.039069 7 log.go:172] (0x4002ed86e0) (0x40010ec960) Stream added, broadcasting: 5 I0821 00:39:33.040535 7 log.go:172] (0x4002ed86e0) Reply frame received for 5 I0821 00:39:33.093584 7 log.go:172] (0x4002ed86e0) Data frame received for 5 I0821 00:39:33.093753 7 log.go:172] (0x40010ec960) (5) Data frame handling I0821 00:39:33.093853 7 log.go:172] (0x4002ed86e0) Data frame received for 3 I0821 00:39:33.093968 7 log.go:172] (0x4001b1c320) (3) 
Data frame handling I0821 00:39:33.094081 7 log.go:172] (0x4001b1c320) (3) Data frame sent I0821 00:39:33.094160 7 log.go:172] (0x4002ed86e0) Data frame received for 3 I0821 00:39:33.094254 7 log.go:172] (0x4001b1c320) (3) Data frame handling I0821 00:39:33.094803 7 log.go:172] (0x4002ed86e0) Data frame received for 1 I0821 00:39:33.094918 7 log.go:172] (0x4001b1c280) (1) Data frame handling I0821 00:39:33.095031 7 log.go:172] (0x4001b1c280) (1) Data frame sent I0821 00:39:33.095137 7 log.go:172] (0x4002ed86e0) (0x4001b1c280) Stream removed, broadcasting: 1 I0821 00:39:33.095254 7 log.go:172] (0x4002ed86e0) Go away received I0821 00:39:33.095541 7 log.go:172] (0x4002ed86e0) (0x4001b1c280) Stream removed, broadcasting: 1 I0821 00:39:33.095661 7 log.go:172] (0x4002ed86e0) (0x4001b1c320) Stream removed, broadcasting: 3 I0821 00:39:33.095777 7 log.go:172] (0x4002ed86e0) (0x40010ec960) Stream removed, broadcasting: 5 Aug 21 00:39:33.095: INFO: Exec stderr: "" Aug 21 00:39:33.096: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:33.096: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:33.187475 7 log.go:172] (0x4002ed8dc0) (0x4001b1c960) Create stream I0821 00:39:33.187669 7 log.go:172] (0x4002ed8dc0) (0x4001b1c960) Stream added, broadcasting: 1 I0821 00:39:33.193075 7 log.go:172] (0x4002ed8dc0) Reply frame received for 1 I0821 00:39:33.193380 7 log.go:172] (0x4002ed8dc0) (0x4001b1caa0) Create stream I0821 00:39:33.193518 7 log.go:172] (0x4002ed8dc0) (0x4001b1caa0) Stream added, broadcasting: 3 I0821 00:39:33.195985 7 log.go:172] (0x4002ed8dc0) Reply frame received for 3 I0821 00:39:33.196210 7 log.go:172] (0x4002ed8dc0) (0x40010ecb40) Create stream I0821 00:39:33.196326 7 log.go:172] (0x4002ed8dc0) (0x40010ecb40) Stream added, broadcasting: 5 I0821 00:39:33.198339 7 log.go:172] (0x4002ed8dc0) Reply frame 
received for 5 I0821 00:39:33.257040 7 log.go:172] (0x4002ed8dc0) Data frame received for 5 I0821 00:39:33.257196 7 log.go:172] (0x40010ecb40) (5) Data frame handling I0821 00:39:33.257352 7 log.go:172] (0x4002ed8dc0) Data frame received for 3 I0821 00:39:33.257507 7 log.go:172] (0x4001b1caa0) (3) Data frame handling I0821 00:39:33.257643 7 log.go:172] (0x4001b1caa0) (3) Data frame sent I0821 00:39:33.257743 7 log.go:172] (0x4002ed8dc0) Data frame received for 3 I0821 00:39:33.257815 7 log.go:172] (0x4001b1caa0) (3) Data frame handling I0821 00:39:33.258310 7 log.go:172] (0x4002ed8dc0) Data frame received for 1 I0821 00:39:33.258390 7 log.go:172] (0x4001b1c960) (1) Data frame handling I0821 00:39:33.258469 7 log.go:172] (0x4001b1c960) (1) Data frame sent I0821 00:39:33.258562 7 log.go:172] (0x4002ed8dc0) (0x4001b1c960) Stream removed, broadcasting: 1 I0821 00:39:33.258672 7 log.go:172] (0x4002ed8dc0) Go away received I0821 00:39:33.258982 7 log.go:172] (0x4002ed8dc0) (0x4001b1c960) Stream removed, broadcasting: 1 I0821 00:39:33.259067 7 log.go:172] (0x4002ed8dc0) (0x4001b1caa0) Stream removed, broadcasting: 3 I0821 00:39:33.259129 7 log.go:172] (0x4002ed8dc0) (0x40010ecb40) Stream removed, broadcasting: 5 Aug 21 00:39:33.259: INFO: Exec stderr: "" Aug 21 00:39:33.259: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:33.259: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:33.314009 7 log.go:172] (0x4002298370) (0x4001224460) Create stream I0821 00:39:33.314199 7 log.go:172] (0x4002298370) (0x4001224460) Stream added, broadcasting: 1 I0821 00:39:33.317131 7 log.go:172] (0x4002298370) Reply frame received for 1 I0821 00:39:33.317309 7 log.go:172] (0x4002298370) (0x4001224960) Create stream I0821 00:39:33.317380 7 log.go:172] (0x4002298370) (0x4001224960) Stream added, broadcasting: 3 I0821 
00:39:33.318829 7 log.go:172] (0x4002298370) Reply frame received for 3 I0821 00:39:33.318971 7 log.go:172] (0x4002298370) (0x4000fabd60) Create stream I0821 00:39:33.319034 7 log.go:172] (0x4002298370) (0x4000fabd60) Stream added, broadcasting: 5 I0821 00:39:33.320162 7 log.go:172] (0x4002298370) Reply frame received for 5 I0821 00:39:33.382693 7 log.go:172] (0x4002298370) Data frame received for 5 I0821 00:39:33.382882 7 log.go:172] (0x4000fabd60) (5) Data frame handling I0821 00:39:33.383035 7 log.go:172] (0x4002298370) Data frame received for 3 I0821 00:39:33.383194 7 log.go:172] (0x4001224960) (3) Data frame handling I0821 00:39:33.383320 7 log.go:172] (0x4001224960) (3) Data frame sent I0821 00:39:33.383450 7 log.go:172] (0x4002298370) Data frame received for 3 I0821 00:39:33.383593 7 log.go:172] (0x4001224960) (3) Data frame handling I0821 00:39:33.383889 7 log.go:172] (0x4002298370) Data frame received for 1 I0821 00:39:33.384071 7 log.go:172] (0x4001224460) (1) Data frame handling I0821 00:39:33.384222 7 log.go:172] (0x4001224460) (1) Data frame sent I0821 00:39:33.384415 7 log.go:172] (0x4002298370) (0x4001224460) Stream removed, broadcasting: 1 I0821 00:39:33.384601 7 log.go:172] (0x4002298370) Go away received I0821 00:39:33.385041 7 log.go:172] (0x4002298370) (0x4001224460) Stream removed, broadcasting: 1 I0821 00:39:33.385172 7 log.go:172] (0x4002298370) (0x4001224960) Stream removed, broadcasting: 3 I0821 00:39:33.385314 7 log.go:172] (0x4002298370) (0x4000fabd60) Stream removed, broadcasting: 5 Aug 21 00:39:33.385: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 21 00:39:33.385: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:33.385: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:33.442427 7 log.go:172] 
(0x40022989a0) (0x40012250e0) Create stream I0821 00:39:33.442656 7 log.go:172] (0x40022989a0) (0x40012250e0) Stream added, broadcasting: 1 I0821 00:39:33.446913 7 log.go:172] (0x40022989a0) Reply frame received for 1 I0821 00:39:33.447111 7 log.go:172] (0x40022989a0) (0x4000fabe00) Create stream I0821 00:39:33.447185 7 log.go:172] (0x40022989a0) (0x4000fabe00) Stream added, broadcasting: 3 I0821 00:39:33.448592 7 log.go:172] (0x40022989a0) Reply frame received for 3 I0821 00:39:33.448781 7 log.go:172] (0x40022989a0) (0x4001225360) Create stream I0821 00:39:33.448867 7 log.go:172] (0x40022989a0) (0x4001225360) Stream added, broadcasting: 5 I0821 00:39:33.450212 7 log.go:172] (0x40022989a0) Reply frame received for 5 I0821 00:39:33.507026 7 log.go:172] (0x40022989a0) Data frame received for 5 I0821 00:39:33.507205 7 log.go:172] (0x4001225360) (5) Data frame handling I0821 00:39:33.507447 7 log.go:172] (0x40022989a0) Data frame received for 3 I0821 00:39:33.507591 7 log.go:172] (0x4000fabe00) (3) Data frame handling I0821 00:39:33.507799 7 log.go:172] (0x4000fabe00) (3) Data frame sent I0821 00:39:33.507992 7 log.go:172] (0x40022989a0) Data frame received for 3 I0821 00:39:33.508149 7 log.go:172] (0x4000fabe00) (3) Data frame handling I0821 00:39:33.508414 7 log.go:172] (0x40022989a0) Data frame received for 1 I0821 00:39:33.508563 7 log.go:172] (0x40012250e0) (1) Data frame handling I0821 00:39:33.508838 7 log.go:172] (0x40012250e0) (1) Data frame sent I0821 00:39:33.509009 7 log.go:172] (0x40022989a0) (0x40012250e0) Stream removed, broadcasting: 1 I0821 00:39:33.509190 7 log.go:172] (0x40022989a0) Go away received I0821 00:39:33.509598 7 log.go:172] (0x40022989a0) (0x40012250e0) Stream removed, broadcasting: 1 I0821 00:39:33.509714 7 log.go:172] (0x40022989a0) (0x4000fabe00) Stream removed, broadcasting: 3 I0821 00:39:33.509806 7 log.go:172] (0x40022989a0) (0x4001225360) Stream removed, broadcasting: 5 Aug 21 00:39:33.509: INFO: Exec stderr: "" Aug 21 00:39:33.510: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:33.510: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:33.573489 7 log.go:172] (0x4003132d10) (0x40010ed040) Create stream I0821 00:39:33.573690 7 log.go:172] (0x4003132d10) (0x40010ed040) Stream added, broadcasting: 1 I0821 00:39:33.578044 7 log.go:172] (0x4003132d10) Reply frame received for 1 I0821 00:39:33.578301 7 log.go:172] (0x4003132d10) (0x40024b8460) Create stream I0821 00:39:33.578405 7 log.go:172] (0x4003132d10) (0x40024b8460) Stream added, broadcasting: 3 I0821 00:39:33.580178 7 log.go:172] (0x4003132d10) Reply frame received for 3 I0821 00:39:33.580341 7 log.go:172] (0x4003132d10) (0x40024b8500) Create stream I0821 00:39:33.580449 7 log.go:172] (0x4003132d10) (0x40024b8500) Stream added, broadcasting: 5 I0821 00:39:33.581783 7 log.go:172] (0x4003132d10) Reply frame received for 5 I0821 00:39:33.637013 7 log.go:172] (0x4003132d10) Data frame received for 3 I0821 00:39:33.637201 7 log.go:172] (0x40024b8460) (3) Data frame handling I0821 00:39:33.637359 7 log.go:172] (0x4003132d10) Data frame received for 5 I0821 00:39:33.637577 7 log.go:172] (0x40024b8500) (5) Data frame handling I0821 00:39:33.637854 7 log.go:172] (0x40024b8460) (3) Data frame sent I0821 00:39:33.638127 7 log.go:172] (0x4003132d10) Data frame received for 3 I0821 00:39:33.638385 7 log.go:172] (0x40024b8460) (3) Data frame handling I0821 00:39:33.638677 7 log.go:172] (0x4003132d10) Data frame received for 1 I0821 00:39:33.638848 7 log.go:172] (0x40010ed040) (1) Data frame handling I0821 00:39:33.639030 7 log.go:172] (0x40010ed040) (1) Data frame sent I0821 00:39:33.639218 7 log.go:172] (0x4003132d10) (0x40010ed040) Stream removed, broadcasting: 1 I0821 00:39:33.639405 7 log.go:172] (0x4003132d10) Go away received I0821 00:39:33.639906 7 log.go:172] (0x4003132d10) 
(0x40010ed040) Stream removed, broadcasting: 1 I0821 00:39:33.640018 7 log.go:172] (0x4003132d10) (0x40024b8460) Stream removed, broadcasting: 3 I0821 00:39:33.640110 7 log.go:172] (0x4003132d10) (0x40024b8500) Stream removed, broadcasting: 5 Aug 21 00:39:33.640: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 21 00:39:33.640: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:33.640: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:33.708534 7 log.go:172] (0x40024d8370) (0x4001e3e500) Create stream I0821 00:39:33.708710 7 log.go:172] (0x40024d8370) (0x4001e3e500) Stream added, broadcasting: 1 I0821 00:39:33.712703 7 log.go:172] (0x40024d8370) Reply frame received for 1 I0821 00:39:33.712944 7 log.go:172] (0x40024d8370) (0x4001b1cb40) Create stream I0821 00:39:33.713012 7 log.go:172] (0x40024d8370) (0x4001b1cb40) Stream added, broadcasting: 3 I0821 00:39:33.714588 7 log.go:172] (0x40024d8370) Reply frame received for 3 I0821 00:39:33.714718 7 log.go:172] (0x40024d8370) (0x40024b85a0) Create stream I0821 00:39:33.714793 7 log.go:172] (0x40024d8370) (0x40024b85a0) Stream added, broadcasting: 5 I0821 00:39:33.716117 7 log.go:172] (0x40024d8370) Reply frame received for 5 I0821 00:39:33.786726 7 log.go:172] (0x40024d8370) Data frame received for 3 I0821 00:39:33.786879 7 log.go:172] (0x4001b1cb40) (3) Data frame handling I0821 00:39:33.786954 7 log.go:172] (0x40024d8370) Data frame received for 5 I0821 00:39:33.787043 7 log.go:172] (0x40024b85a0) (5) Data frame handling I0821 00:39:33.787111 7 log.go:172] (0x4001b1cb40) (3) Data frame sent I0821 00:39:33.787228 7 log.go:172] (0x40024d8370) Data frame received for 3 I0821 00:39:33.787293 7 log.go:172] (0x4001b1cb40) (3) Data frame handling I0821 00:39:33.788308 7 
log.go:172] (0x40024d8370) Data frame received for 1 I0821 00:39:33.788389 7 log.go:172] (0x4001e3e500) (1) Data frame handling I0821 00:39:33.788462 7 log.go:172] (0x4001e3e500) (1) Data frame sent I0821 00:39:33.788560 7 log.go:172] (0x40024d8370) (0x4001e3e500) Stream removed, broadcasting: 1 I0821 00:39:33.788675 7 log.go:172] (0x40024d8370) Go away received I0821 00:39:33.789078 7 log.go:172] (0x40024d8370) (0x4001e3e500) Stream removed, broadcasting: 1 I0821 00:39:33.789161 7 log.go:172] (0x40024d8370) (0x4001b1cb40) Stream removed, broadcasting: 3 I0821 00:39:33.789222 7 log.go:172] (0x40024d8370) (0x40024b85a0) Stream removed, broadcasting: 5 Aug 21 00:39:33.789: INFO: Exec stderr: "" Aug 21 00:39:33.789: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:33.789: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:33.845989 7 log.go:172] (0x40024d89a0) (0x4001e3e6e0) Create stream I0821 00:39:33.846125 7 log.go:172] (0x40024d89a0) (0x4001e3e6e0) Stream added, broadcasting: 1 I0821 00:39:33.849583 7 log.go:172] (0x40024d89a0) Reply frame received for 1 I0821 00:39:33.849892 7 log.go:172] (0x40024d89a0) (0x4001b1cc80) Create stream I0821 00:39:33.850027 7 log.go:172] (0x40024d89a0) (0x4001b1cc80) Stream added, broadcasting: 3 I0821 00:39:33.851714 7 log.go:172] (0x40024d89a0) Reply frame received for 3 I0821 00:39:33.851900 7 log.go:172] (0x40024d89a0) (0x4001e3e780) Create stream I0821 00:39:33.851995 7 log.go:172] (0x40024d89a0) (0x4001e3e780) Stream added, broadcasting: 5 I0821 00:39:33.853577 7 log.go:172] (0x40024d89a0) Reply frame received for 5 I0821 00:39:33.919426 7 log.go:172] (0x40024d89a0) Data frame received for 5 I0821 00:39:33.919569 7 log.go:172] (0x4001e3e780) (5) Data frame handling I0821 00:39:33.919747 7 log.go:172] (0x40024d89a0) Data frame received for 3 I0821 
00:39:33.919894 7 log.go:172] (0x4001b1cc80) (3) Data frame handling I0821 00:39:33.919981 7 log.go:172] (0x4001b1cc80) (3) Data frame sent I0821 00:39:33.920042 7 log.go:172] (0x40024d89a0) Data frame received for 3 I0821 00:39:33.920092 7 log.go:172] (0x4001b1cc80) (3) Data frame handling I0821 00:39:33.920618 7 log.go:172] (0x40024d89a0) Data frame received for 1 I0821 00:39:33.920699 7 log.go:172] (0x4001e3e6e0) (1) Data frame handling I0821 00:39:33.920841 7 log.go:172] (0x4001e3e6e0) (1) Data frame sent I0821 00:39:33.920920 7 log.go:172] (0x40024d89a0) (0x4001e3e6e0) Stream removed, broadcasting: 1 I0821 00:39:33.921027 7 log.go:172] (0x40024d89a0) Go away received I0821 00:39:33.921425 7 log.go:172] (0x40024d89a0) (0x4001e3e6e0) Stream removed, broadcasting: 1 I0821 00:39:33.921589 7 log.go:172] (0x40024d89a0) (0x4001b1cc80) Stream removed, broadcasting: 3 I0821 00:39:33.921693 7 log.go:172] (0x40024d89a0) (0x4001e3e780) Stream removed, broadcasting: 5 Aug 21 00:39:33.921: INFO: Exec stderr: "" Aug 21 00:39:33.921: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:33.922: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:33.973569 7 log.go:172] (0x4002ed93f0) (0x4001b1cfa0) Create stream I0821 00:39:33.973698 7 log.go:172] (0x4002ed93f0) (0x4001b1cfa0) Stream added, broadcasting: 1 I0821 00:39:33.977925 7 log.go:172] (0x4002ed93f0) Reply frame received for 1 I0821 00:39:33.978111 7 log.go:172] (0x4002ed93f0) (0x4001b1d0e0) Create stream I0821 00:39:33.978213 7 log.go:172] (0x4002ed93f0) (0x4001b1d0e0) Stream added, broadcasting: 3 I0821 00:39:33.979736 7 log.go:172] (0x4002ed93f0) Reply frame received for 3 I0821 00:39:33.979872 7 log.go:172] (0x4002ed93f0) (0x4001e3e8c0) Create stream I0821 00:39:33.979945 7 log.go:172] (0x4002ed93f0) (0x4001e3e8c0) Stream added, broadcasting: 5 I0821 
00:39:33.981585 7 log.go:172] (0x4002ed93f0) Reply frame received for 5 I0821 00:39:34.037913 7 log.go:172] (0x4002ed93f0) Data frame received for 3 I0821 00:39:34.038113 7 log.go:172] (0x4001b1d0e0) (3) Data frame handling I0821 00:39:34.038251 7 log.go:172] (0x4001b1d0e0) (3) Data frame sent I0821 00:39:34.038388 7 log.go:172] (0x4002ed93f0) Data frame received for 3 I0821 00:39:34.038496 7 log.go:172] (0x4001b1d0e0) (3) Data frame handling I0821 00:39:34.038601 7 log.go:172] (0x4002ed93f0) Data frame received for 5 I0821 00:39:34.038719 7 log.go:172] (0x4001e3e8c0) (5) Data frame handling I0821 00:39:34.039441 7 log.go:172] (0x4002ed93f0) Data frame received for 1 I0821 00:39:34.039573 7 log.go:172] (0x4001b1cfa0) (1) Data frame handling I0821 00:39:34.039702 7 log.go:172] (0x4001b1cfa0) (1) Data frame sent I0821 00:39:34.039827 7 log.go:172] (0x4002ed93f0) (0x4001b1cfa0) Stream removed, broadcasting: 1 I0821 00:39:34.039942 7 log.go:172] (0x4002ed93f0) Go away received I0821 00:39:34.040360 7 log.go:172] (0x4002ed93f0) (0x4001b1cfa0) Stream removed, broadcasting: 1 I0821 00:39:34.040527 7 log.go:172] (0x4002ed93f0) (0x4001b1d0e0) Stream removed, broadcasting: 3 I0821 00:39:34.040615 7 log.go:172] (0x4002ed93f0) (0x4001e3e8c0) Stream removed, broadcasting: 5 Aug 21 00:39:34.040: INFO: Exec stderr: "" Aug 21 00:39:34.040: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-689 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:39:34.041: INFO: >>> kubeConfig: /root/.kube/config I0821 00:39:34.097944 7 log.go:172] (0x400304e580) (0x40024b8820) Create stream I0821 00:39:34.098105 7 log.go:172] (0x400304e580) (0x40024b8820) Stream added, broadcasting: 1 I0821 00:39:34.105186 7 log.go:172] (0x400304e580) Reply frame received for 1 I0821 00:39:34.105406 7 log.go:172] (0x400304e580) (0x4001225540) Create stream I0821 00:39:34.105506 7 log.go:172] 
(0x400304e580) (0x4001225540) Stream added, broadcasting: 3 I0821 00:39:34.107462 7 log.go:172] (0x400304e580) Reply frame received for 3 I0821 00:39:34.107590 7 log.go:172] (0x400304e580) (0x40024b8aa0) Create stream I0821 00:39:34.107664 7 log.go:172] (0x400304e580) (0x40024b8aa0) Stream added, broadcasting: 5 I0821 00:39:34.108870 7 log.go:172] (0x400304e580) Reply frame received for 5 I0821 00:39:34.173514 7 log.go:172] (0x400304e580) Data frame received for 3 I0821 00:39:34.173661 7 log.go:172] (0x4001225540) (3) Data frame handling I0821 00:39:34.173767 7 log.go:172] (0x400304e580) Data frame received for 5 I0821 00:39:34.173930 7 log.go:172] (0x40024b8aa0) (5) Data frame handling I0821 00:39:34.174038 7 log.go:172] (0x4001225540) (3) Data frame sent I0821 00:39:34.174177 7 log.go:172] (0x400304e580) Data frame received for 3 I0821 00:39:34.174299 7 log.go:172] (0x4001225540) (3) Data frame handling I0821 00:39:34.174549 7 log.go:172] (0x400304e580) Data frame received for 1 I0821 00:39:34.174669 7 log.go:172] (0x40024b8820) (1) Data frame handling I0821 00:39:34.174777 7 log.go:172] (0x40024b8820) (1) Data frame sent I0821 00:39:34.174882 7 log.go:172] (0x400304e580) (0x40024b8820) Stream removed, broadcasting: 1 I0821 00:39:34.175021 7 log.go:172] (0x400304e580) Go away received I0821 00:39:34.175449 7 log.go:172] (0x400304e580) (0x40024b8820) Stream removed, broadcasting: 1 I0821 00:39:34.175585 7 log.go:172] (0x400304e580) (0x4001225540) Stream removed, broadcasting: 3 I0821 00:39:34.175686 7 log.go:172] (0x400304e580) (0x40024b8aa0) Stream removed, broadcasting: 5 Aug 21 00:39:34.175: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:39:34.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-689" for this suite. 
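The passing test above execs `cat /etc/hosts` in containers of a normal pod and a `hostNetwork` pod: the kubelet injects a managed `/etc/hosts` only for pods on the pod network. A minimal sketch of the two kinds of pod manifests involved (names and images are illustrative, not the exact specs the e2e framework generates):

```python
# Sketch of the two pod shapes the kubelet-managed /etc/hosts test
# exercises. Manifest contents are illustrative, built as plain dicts.

def make_pod(name, host_network):
    """Build a minimal busybox pod manifest as a plain dict."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            # hostNetwork pods keep the node's own /etc/hosts; the kubelet
            # only injects a managed /etc/hosts for pod-network pods.
            "hostNetwork": host_network,
            "containers": [{
                "name": "busybox-1",
                "image": "busybox",
                "command": ["sleep", "3600"],
            }],
        },
    }

pod = make_pod("test-pod", host_network=False)
host_pod = make_pod("test-host-network-pod", host_network=True)

# The e2e test then execs `cat /etc/hosts` in each container and expects
# the "Kubernetes-managed hosts file" header only in the pod-network case.
```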
• [SLOW TEST:15.784 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1241,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:39:34.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-434eb495-fedc-480f-b878-4f9680471f56 STEP: Creating a pod to test consume configMaps Aug 21 00:39:34.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85" in namespace "configmap-1620" to be "success or failure" Aug 21 00:39:34.467: INFO: Pod "pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85": 
Phase="Pending", Reason="", readiness=false. Elapsed: 61.050882ms Aug 21 00:39:36.475: INFO: Pod "pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068837035s Aug 21 00:39:38.569: INFO: Pod "pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162944725s Aug 21 00:39:40.655: INFO: Pod "pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.248614686s STEP: Saw pod success Aug 21 00:39:40.655: INFO: Pod "pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85" satisfied condition "success or failure" Aug 21 00:39:41.296: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85 container configmap-volume-test: STEP: delete the pod Aug 21 00:39:41.646: INFO: Waiting for pod pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85 to disappear Aug 21 00:39:41.656: INFO: Pod pod-configmaps-2bc35b63-918d-4fe1-a900-fdf881d29f85 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:39:41.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1620" for this suite. 
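The ConfigMap volume test above creates a ConfigMap, mounts it into a short-lived pod, and compares the container's logs against the ConfigMap's data. A minimal sketch of that pod shape, with assumed names (the real test generates UUID-suffixed names like the ones in the log):

```python
# Sketch (assumed names) of a pod consuming a ConfigMap as a volume,
# the pattern exercised by the test above.
configmap_name = "configmap-test-volume-example"  # illustrative

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{
            "name": "configmap-volume",
            "configMap": {"name": configmap_name},
        }],
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            # Print a mounted key so the test can compare logs to the
            # ConfigMap's data, then exit 0 (pod phase "Succeeded").
            "command": ["cat", "/etc/configmap-volume/data-1"],
            "volumeMounts": [{
                "name": "configmap-volume",
                "mountPath": "/etc/configmap-volume",
            }],
        }],
    },
}
```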
• [SLOW TEST:7.492 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1245,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:39:41.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:39:47.017: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:39:49.311: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567187, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567187, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567187, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567186, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 00:39:51.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567187, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567187, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567187, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567186, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 
21 00:39:54.346: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:39:54.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2045" for this suite. STEP: Destroying namespace "webhook-2045-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.925 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":78,"skipped":1254,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:39:54.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:39:57.861: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:40:00.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567197, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567197, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567197, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567197, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:40:03.070: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:40:03.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5296" for this suite. STEP: Destroying namespace "webhook-5296-markers" for this suite. 
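The STEP lines above walk the discovery documents top-down: `/apis`, then the group document, then the group/version resource list. The same checks can be sketched against a hand-written sample of what those endpoints return (structure per the Kubernetes discovery API; the data below is abbreviated, not a live response):

```python
# Abbreviated sample of the /apis APIGroupList document.
apis_doc = {
    "kind": "APIGroupList",
    "groups": [{
        "name": "admissionregistration.k8s.io",
        "versions": [{
            "groupVersion": "admissionregistration.k8s.io/v1",
            "version": "v1",
        }],
    }],
}

# Abbreviated sample of /apis/admissionregistration.k8s.io/v1.
v1_doc = {
    "kind": "APIResourceList",
    "groupVersion": "admissionregistration.k8s.io/v1",
    "resources": [
        {"name": "mutatingwebhookconfigurations", "namespaced": False},
        {"name": "validatingwebhookconfigurations", "namespaced": False},
    ],
}

# Find the admissionregistration.k8s.io group in the /apis document...
group = next(g for g in apis_doc["groups"]
             if g["name"] == "admissionregistration.k8s.io")
# ...confirm it advertises the v1 group/version...
assert any(v["groupVersion"] == "admissionregistration.k8s.io/v1"
           for v in group["versions"])
# ...and confirm both webhook resources appear in the v1 resource list.
resource_names = {r["name"] for r in v1_doc["resources"]}
assert {"mutatingwebhookconfigurations",
        "validatingwebhookconfigurations"} <= resource_names
```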
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.685 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":79,"skipped":1255,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:40:03.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8357 
STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 21 00:40:03.399: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 21 00:40:27.955: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.6:8080/dial?request=hostname&protocol=udp&host=10.244.2.243&port=8081&tries=1'] Namespace:pod-network-test-8357 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:40:27.955: INFO: >>> kubeConfig: /root/.kube/config I0821 00:40:28.009733 7 log.go:172] (0x4002b0a370) (0x40019ccbe0) Create stream I0821 00:40:28.009911 7 log.go:172] (0x4002b0a370) (0x40019ccbe0) Stream added, broadcasting: 1 I0821 00:40:28.012561 7 log.go:172] (0x4002b0a370) Reply frame received for 1 I0821 00:40:28.012840 7 log.go:172] (0x4002b0a370) (0x40010ec960) Create stream I0821 00:40:28.012940 7 log.go:172] (0x4002b0a370) (0x40010ec960) Stream added, broadcasting: 3 I0821 00:40:28.014421 7 log.go:172] (0x4002b0a370) Reply frame received for 3 I0821 00:40:28.014543 7 log.go:172] (0x4002b0a370) (0x40019cce60) Create stream I0821 00:40:28.014613 7 log.go:172] (0x4002b0a370) (0x40019cce60) Stream added, broadcasting: 5 I0821 00:40:28.016177 7 log.go:172] (0x4002b0a370) Reply frame received for 5 I0821 00:40:28.245269 7 log.go:172] (0x4002b0a370) Data frame received for 3 I0821 00:40:28.245494 7 log.go:172] (0x40010ec960) (3) Data frame handling I0821 00:40:28.245643 7 log.go:172] (0x40010ec960) (3) Data frame sent I0821 00:40:28.245765 7 log.go:172] (0x4002b0a370) Data frame received for 3 I0821 00:40:28.245890 7 log.go:172] (0x40010ec960) (3) Data frame handling I0821 00:40:28.246101 7 log.go:172] (0x4002b0a370) Data frame received for 5 I0821 00:40:28.246267 7 log.go:172] (0x40019cce60) (5) Data frame handling I0821 00:40:28.247653 7 log.go:172] (0x4002b0a370) Data frame received for 1 I0821 00:40:28.247755 7 log.go:172] (0x40019ccbe0) (1) 
Data frame handling I0821 00:40:28.247888 7 log.go:172] (0x40019ccbe0) (1) Data frame sent I0821 00:40:28.248055 7 log.go:172] (0x4002b0a370) (0x40019ccbe0) Stream removed, broadcasting: 1 I0821 00:40:28.248234 7 log.go:172] (0x4002b0a370) Go away received I0821 00:40:28.248861 7 log.go:172] (0x4002b0a370) (0x40019ccbe0) Stream removed, broadcasting: 1 I0821 00:40:28.249047 7 log.go:172] (0x4002b0a370) (0x40010ec960) Stream removed, broadcasting: 3 I0821 00:40:28.249161 7 log.go:172] (0x4002b0a370) (0x40019cce60) Stream removed, broadcasting: 5 Aug 21 00:40:28.250: INFO: Waiting for responses: map[] Aug 21 00:40:28.255: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.6:8080/dial?request=hostname&protocol=udp&host=10.244.1.5&port=8081&tries=1'] Namespace:pod-network-test-8357 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:40:28.255: INFO: >>> kubeConfig: /root/.kube/config I0821 00:40:28.314716 7 log.go:172] (0x40024d82c0) (0x40013a2820) Create stream I0821 00:40:28.314944 7 log.go:172] (0x40024d82c0) (0x40013a2820) Stream added, broadcasting: 1 I0821 00:40:28.318503 7 log.go:172] (0x40024d82c0) Reply frame received for 1 I0821 00:40:28.318721 7 log.go:172] (0x40024d82c0) (0x40019cd180) Create stream I0821 00:40:28.318838 7 log.go:172] (0x40024d82c0) (0x40019cd180) Stream added, broadcasting: 3 I0821 00:40:28.320643 7 log.go:172] (0x40024d82c0) Reply frame received for 3 I0821 00:40:28.320851 7 log.go:172] (0x40024d82c0) (0x4002cec000) Create stream I0821 00:40:28.320933 7 log.go:172] (0x40024d82c0) (0x4002cec000) Stream added, broadcasting: 5 I0821 00:40:28.322270 7 log.go:172] (0x40024d82c0) Reply frame received for 5 I0821 00:40:28.381149 7 log.go:172] (0x40024d82c0) Data frame received for 3 I0821 00:40:28.381419 7 log.go:172] (0x40019cd180) (3) Data frame handling I0821 00:40:28.381670 7 log.go:172] (0x40019cd180) (3) Data frame sent I0821 
00:40:28.381834 7 log.go:172] (0x40024d82c0) Data frame received for 3 I0821 00:40:28.381994 7 log.go:172] (0x40019cd180) (3) Data frame handling I0821 00:40:28.382234 7 log.go:172] (0x40024d82c0) Data frame received for 5 I0821 00:40:28.382388 7 log.go:172] (0x4002cec000) (5) Data frame handling I0821 00:40:28.383630 7 log.go:172] (0x40024d82c0) Data frame received for 1 I0821 00:40:28.383736 7 log.go:172] (0x40013a2820) (1) Data frame handling I0821 00:40:28.383835 7 log.go:172] (0x40013a2820) (1) Data frame sent I0821 00:40:28.383929 7 log.go:172] (0x40024d82c0) (0x40013a2820) Stream removed, broadcasting: 1 I0821 00:40:28.384043 7 log.go:172] (0x40024d82c0) Go away received I0821 00:40:28.384377 7 log.go:172] (0x40024d82c0) (0x40013a2820) Stream removed, broadcasting: 1 I0821 00:40:28.384449 7 log.go:172] (0x40024d82c0) (0x40019cd180) Stream removed, broadcasting: 3 I0821 00:40:28.384509 7 log.go:172] (0x40024d82c0) (0x4002cec000) Stream removed, broadcasting: 5 Aug 21 00:40:28.384: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:40:28.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8357" for this suite. 
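The curl targets in the log above hit the agnhost container's `/dial` endpoint on port 8080, which relays `request` to `host:port` over the named protocol and reports the responses. How those URLs are assembled can be sketched as follows (the pod IPs are of course run-specific):

```python
# Sketch of the /dial URL shape seen in the log above. Parameter names
# mirror the logged query string; IPs are taken from this run's log and
# would differ on any other run.
from urllib.parse import urlencode

def dial_url(dialer_ip, target_ip, protocol="udp",
             port=8081, request="hostname", tries=1):
    """Build the netcheck dial URL served by the host test pod's agnhost."""
    query = urlencode({"request": request, "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{dialer_ip}:8080/dial?{query}"

url = dial_url("10.244.1.6", "10.244.2.243")
# Matches the shape of the first curl target in the log above.
```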
• [SLOW TEST:25.106 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:40:28.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 21 00:40:28.601: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2" in namespace "downward-api-7961" to be "success or failure" Aug 21 00:40:28.693: INFO: Pod "downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2": Phase="Pending", Reason="", readiness=false. Elapsed: 91.322352ms Aug 21 00:40:30.698: INFO: Pod "downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097030972s Aug 21 00:40:32.704: INFO: Pod "downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103109338s STEP: Saw pod success Aug 21 00:40:32.705: INFO: Pod "downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2" satisfied condition "success or failure" Aug 21 00:40:32.708: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2 container client-container: STEP: delete the pod Aug 21 00:40:32.738: INFO: Waiting for pod downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2 to disappear Aug 21 00:40:32.742: INFO: Pod downwardapi-volume-a7364a6f-069e-4332-8ca5-d80712b08de2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:40:32.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7961" for this suite. 
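The Downward API test above projects the container's memory request into a file and reads it back from the container's logs. A minimal sketch of that pod shape, with assumed names and an illustrative request size:

```python
# Sketch (assumed names) of a downward API volume exposing the container's
# memory request as a file, the pattern the test above consumes.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["cat", "/etc/podinfo/memory_request"],
            "resources": {"requests": {"memory": "32Mi"}},
            "volumeMounts": [{"name": "podinfo",
                              "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "memory_request",
                    # resourceFieldRef projects the request into the file;
                    # divisor "1" yields the value in bytes.
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "requests.memory",
                        "divisor": "1",
                    },
                }],
            },
        }],
    },
}
```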
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1290,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:40:32.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-cbd6b810-f5d0-427e-b130-d66ab6cb5d67 STEP: Creating a pod to test consume secrets Aug 21 00:40:32.922: INFO: Waiting up to 5m0s for pod "pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df" in namespace "secrets-6633" to be "success or failure" Aug 21 00:40:32.927: INFO: Pod "pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.747568ms Aug 21 00:40:34.932: INFO: Pod "pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00992686s Aug 21 00:40:36.939: INFO: Pod "pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.016701051s Aug 21 00:40:38.945: INFO: Pod "pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023127218s STEP: Saw pod success Aug 21 00:40:38.945: INFO: Pod "pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df" satisfied condition "success or failure" Aug 21 00:40:38.949: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df container secret-volume-test: STEP: delete the pod Aug 21 00:40:38.989: INFO: Waiting for pod pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df to disappear Aug 21 00:40:38.998: INFO: Pod pod-secrets-3141a18b-d92a-45a8-8d38-46c8e6d168df no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:40:38.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6633" for this suite. 
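The Secrets test above mounts a secret with a remapped path and an explicit per-item file mode, then verifies the file's content and permissions from inside the pod. A minimal sketch of that pod shape, with assumed names and an illustrative mode:

```python
# Sketch (assumed names) of a secret volume with per-item path mapping and
# an explicit item mode, the pattern this test checks. 0o400 is illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{
            "name": "secret-volume",
            "secret": {
                "secretName": "secret-test-map-example",
                "items": [{
                    "key": "data-1",
                    "path": "new-path-data-1",  # remapped from the key name
                    "mode": 0o400,  # serializes as `mode: 256` in JSON
                }],
            },
        }],
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/secret-volume/new-path-data-1"],
            "volumeMounts": [{"name": "secret-volume",
                              "mountPath": "/etc/secret-volume"}],
        }],
    },
}
```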
• [SLOW TEST:6.237 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1294,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:40:39.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:40:41.511: INFO: deployment "sample-webhook-deployment" doesn't have the required revision 
set Aug 21 00:40:43.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567241, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567241, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567241, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567241, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:40:46.557: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:40:46.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2746-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:40:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-894" for this suite. STEP: Destroying namespace "webhook-894-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.937 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":83,"skipped":1301,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:40:47.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Aug 21 00:40:52.111: INFO: Pod pod-hostip-19fd6c84-c229-450b-b975-9f6480174948 has hostIP: 172.18.0.3 [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:40:52.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6417" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1312,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:40:52.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-95c85029-d367-49a3-97a9-0ffb3641a73d [AfterEach] [sig-node] ConfigMap 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:40:52.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3490" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":85,"skipped":1321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:40:52.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526 [It] should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 21 00:40:52.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 
--namespace=kubectl-9118' Aug 21 00:40:53.665: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 21 00:40:53.665: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Aug 21 00:40:53.677: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-4xwhz] Aug 21 00:40:53.677: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-4xwhz" in namespace "kubectl-9118" to be "running and ready" Aug 21 00:40:53.680: INFO: Pod "e2e-test-httpd-rc-4xwhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.991543ms Aug 21 00:40:55.685: INFO: Pod "e2e-test-httpd-rc-4xwhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007942167s Aug 21 00:40:57.757: INFO: Pod "e2e-test-httpd-rc-4xwhz": Phase="Running", Reason="", readiness=true. Elapsed: 4.079857559s Aug 21 00:40:57.757: INFO: Pod "e2e-test-httpd-rc-4xwhz" satisfied condition "running and ready" Aug 21 00:40:57.758: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-4xwhz] Aug 21 00:40:57.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9118' Aug 21 00:40:59.082: INFO: stderr: "" Aug 21 00:40:59.082: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.247. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.247. 
Set the 'ServerName' directive globally to suppress this message\n[Fri Aug 21 00:40:55.733260 2020] [mpm_event:notice] [pid 1:tid 139718291860328] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Aug 21 00:40:55.733303 2020] [core:notice] [pid 1:tid 139718291860328] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531 Aug 21 00:40:59.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9118' Aug 21 00:41:00.359: INFO: stderr: "" Aug 21 00:41:00.359: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:41:00.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9118" for this suite. 
• [SLOW TEST:8.135 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":86,"skipped":1351,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:41:00.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 21 
00:41:00.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8" in namespace "downward-api-1829" to be "success or failure" Aug 21 00:41:00.552: INFO: Pod "downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8": Phase="Pending", Reason="", readiness=false. Elapsed: 76.505941ms Aug 21 00:41:02.578: INFO: Pod "downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103335123s Aug 21 00:41:04.590: INFO: Pod "downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114618813s STEP: Saw pod success Aug 21 00:41:04.590: INFO: Pod "downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8" satisfied condition "success or failure" Aug 21 00:41:04.595: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8 container client-container: STEP: delete the pod Aug 21 00:41:04.825: INFO: Waiting for pod downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8 to disappear Aug 21 00:41:04.881: INFO: Pod downwardapi-volume-4aab7744-4449-4594-a4c3-41eebd395df8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:41:04.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1829" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:41:04.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-32211078-e3d1-41f6-9fd0-86e6c3d887b5 STEP: Creating configMap with name cm-test-opt-upd-2a34c8b9-ba19-488c-8df0-7b2fd141a70f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-32211078-e3d1-41f6-9fd0-86e6c3d887b5 STEP: Updating configmap cm-test-opt-upd-2a34c8b9-ba19-488c-8df0-7b2fd141a70f STEP: Creating configMap with name cm-test-opt-create-4e681648-d9c6-42d6-b291-b832e39cf755 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:42:38.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9282" for this suite. 
• [SLOW TEST:93.499 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1387,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:42:38.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a5c2ff1a-70c4-4be1-b59c-66c29a6b2339 STEP: Creating a pod to test consume secrets Aug 21 00:42:38.707: INFO: Waiting up to 5m0s for pod "pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819" in namespace "secrets-6884" to 
be "success or failure" Aug 21 00:42:38.758: INFO: Pod "pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819": Phase="Pending", Reason="", readiness=false. Elapsed: 51.020118ms Aug 21 00:42:40.765: INFO: Pod "pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05824651s Aug 21 00:42:42.772: INFO: Pod "pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064450947s STEP: Saw pod success Aug 21 00:42:42.772: INFO: Pod "pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819" satisfied condition "success or failure" Aug 21 00:42:42.776: INFO: Trying to get logs from node jerma-worker pod pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819 container secret-volume-test: STEP: delete the pod Aug 21 00:42:42.811: INFO: Waiting for pod pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819 to disappear Aug 21 00:42:42.815: INFO: Pod pod-secrets-731c8365-0f13-41e5-abe2-af0ed65ee819 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:42:42.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6884" for this suite. STEP: Destroying namespace "secret-namespace-6727" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1395,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:42:42.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 21 00:42:42.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42" in namespace "projected-6512" to be "success or failure" Aug 21 00:42:42.998: INFO: Pod "downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42": Phase="Pending", Reason="", readiness=false. Elapsed: 21.875509ms Aug 21 00:42:45.089: INFO: Pod "downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.11328469s Aug 21 00:42:47.096: INFO: Pod "downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120361181s Aug 21 00:42:49.103: INFO: Pod "downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42": Phase="Running", Reason="", readiness=true. Elapsed: 6.127062672s Aug 21 00:42:51.110: INFO: Pod "downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134114109s STEP: Saw pod success Aug 21 00:42:51.111: INFO: Pod "downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42" satisfied condition "success or failure" Aug 21 00:42:51.122: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42 container client-container: STEP: delete the pod Aug 21 00:42:51.183: INFO: Waiting for pod downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42 to disappear Aug 21 00:42:51.194: INFO: Pod downwardapi-volume-1c02be37-0400-4735-b5a5-071e7e067f42 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:42:51.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6512" for this suite. 
• [SLOW TEST:8.339 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1406,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:42:51.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 21 00:42:51.338: INFO: Waiting up to 5m0s for pod "pod-96796627-1bee-492a-84e7-1f8d4e72fc64" in namespace "emptydir-1095" to be "success or failure" Aug 21 00:42:51.343: INFO: Pod "pod-96796627-1bee-492a-84e7-1f8d4e72fc64": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.642951ms Aug 21 00:42:53.350: INFO: Pod "pod-96796627-1bee-492a-84e7-1f8d4e72fc64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011991557s Aug 21 00:42:55.356: INFO: Pod "pod-96796627-1bee-492a-84e7-1f8d4e72fc64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018449783s STEP: Saw pod success Aug 21 00:42:55.357: INFO: Pod "pod-96796627-1bee-492a-84e7-1f8d4e72fc64" satisfied condition "success or failure" Aug 21 00:42:55.361: INFO: Trying to get logs from node jerma-worker pod pod-96796627-1bee-492a-84e7-1f8d4e72fc64 container test-container: STEP: delete the pod Aug 21 00:42:55.399: INFO: Waiting for pod pod-96796627-1bee-492a-84e7-1f8d4e72fc64 to disappear Aug 21 00:42:55.423: INFO: Pod pod-96796627-1bee-492a-84e7-1f8d4e72fc64 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:42:55.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1095" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1414,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:42:55.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 21 00:43:00.405: INFO: Successfully updated pod "annotationupdate37ac10f3-e757-49b0-ac92-5485e8ad1a51" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:43:04.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4626" for this suite. 
• [SLOW TEST:9.031 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1416,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:43:04.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 21 00:43:04.540: INFO: Waiting up to 5m0s for pod "pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605" in namespace "emptydir-1111" to be "success or failure"
Aug 21 00:43:04.554: INFO: Pod "pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605": Phase="Pending", Reason="", readiness=false. Elapsed: 13.566326ms
Aug 21 00:43:06.569: INFO: Pod "pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028204509s
Aug 21 00:43:08.575: INFO: Pod "pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03412272s
STEP: Saw pod success
Aug 21 00:43:08.575: INFO: Pod "pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605" satisfied condition "success or failure"
Aug 21 00:43:08.590: INFO: Trying to get logs from node jerma-worker2 pod pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605 container test-container:
STEP: delete the pod
Aug 21 00:43:08.669: INFO: Waiting for pod pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605 to disappear
Aug 21 00:43:08.739: INFO: Pod pod-c9a7dd18-f351-4a5f-aaf7-bdb86822e605 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:43:08.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1111" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1420,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:43:08.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:43:09.416: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 00:43:09.447: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:09.463: INFO: Number of nodes with available pods: 0
Aug 21 00:43:09.464: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:10.477: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:10.484: INFO: Number of nodes with available pods: 0
Aug 21 00:43:10.484: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:11.743: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:11.764: INFO: Number of nodes with available pods: 0
Aug 21 00:43:11.764: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:12.476: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:12.483: INFO: Number of nodes with available pods: 0
Aug 21 00:43:12.483: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:13.484: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:13.489: INFO: Number of nodes with available pods: 0
Aug 21 00:43:13.490: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:14.472: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:14.477: INFO: Number of nodes with available pods: 0
Aug 21 00:43:14.477: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:15.475: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:15.480: INFO: Number of nodes with available pods: 2
Aug 21 00:43:15.481: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 21 00:43:15.568: INFO: Wrong image for pod: daemon-set-8cfzt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:15.568: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:15.601: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:16.609: INFO: Wrong image for pod: daemon-set-8cfzt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:16.609: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:16.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:17.610: INFO: Wrong image for pod: daemon-set-8cfzt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:17.611: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:17.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:18.609: INFO: Wrong image for pod: daemon-set-8cfzt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:18.610: INFO: Pod daemon-set-8cfzt is not available
Aug 21 00:43:18.610: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:18.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:19.610: INFO: Wrong image for pod: daemon-set-8cfzt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:19.610: INFO: Pod daemon-set-8cfzt is not available
Aug 21 00:43:19.610: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:19.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:20.628: INFO: Wrong image for pod: daemon-set-8cfzt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:20.628: INFO: Pod daemon-set-8cfzt is not available
Aug 21 00:43:20.628: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:20.637: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:21.608: INFO: Wrong image for pod: daemon-set-8cfzt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:21.609: INFO: Pod daemon-set-8cfzt is not available
Aug 21 00:43:21.609: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:21.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:22.610: INFO: Pod daemon-set-4724b is not available
Aug 21 00:43:22.610: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:22.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:23.610: INFO: Pod daemon-set-4724b is not available
Aug 21 00:43:23.610: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:23.621: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:24.609: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:24.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:25.651: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:25.651: INFO: Pod daemon-set-jfwhg is not available
Aug 21 00:43:25.660: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:26.609: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:26.609: INFO: Pod daemon-set-jfwhg is not available
Aug 21 00:43:26.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:27.611: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:27.611: INFO: Pod daemon-set-jfwhg is not available
Aug 21 00:43:27.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:28.609: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:28.609: INFO: Pod daemon-set-jfwhg is not available
Aug 21 00:43:28.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:29.610: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:29.610: INFO: Pod daemon-set-jfwhg is not available
Aug 21 00:43:29.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:30.610: INFO: Wrong image for pod: daemon-set-jfwhg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 00:43:30.610: INFO: Pod daemon-set-jfwhg is not available
Aug 21 00:43:30.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:31.622: INFO: Pod daemon-set-6rjkz is not available
Aug 21 00:43:31.640: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 21 00:43:31.829: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:32.005: INFO: Number of nodes with available pods: 1
Aug 21 00:43:32.005: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:33.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:33.021: INFO: Number of nodes with available pods: 1
Aug 21 00:43:33.021: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:34.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:34.022: INFO: Number of nodes with available pods: 1
Aug 21 00:43:34.022: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:35.014: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:35.019: INFO: Number of nodes with available pods: 1
Aug 21 00:43:35.019: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:43:36.031: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:43:36.038: INFO: Number of nodes with available pods: 2
Aug 21 00:43:36.038: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6603, will wait for the garbage collector to delete the pods
Aug 21 00:43:36.126: INFO: Deleting DaemonSet.extensions daemon-set took: 7.202659ms
Aug 21 00:43:36.527: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.94738ms
Aug 21 00:43:51.938: INFO: Number of nodes with available pods: 0
Aug 21 00:43:51.938: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 00:43:51.942: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6603/daemonsets","resourceVersion":"1981863"},"items":null}
Aug 21 00:43:51.945: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6603/pods","resourceVersion":"1981863"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:43:51.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6603" for this suite.
• [SLOW TEST:43.215 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":94,"skipped":1424,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:43:51.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 21 00:43:52.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5488'
Aug 21 00:43:57.276: INFO: stderr: ""
Aug 21 00:43:57.277: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 00:43:57.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5488'
Aug 21 00:43:58.549: INFO: stderr: ""
Aug 21 00:43:58.549: INFO: stdout: "update-demo-nautilus-kzknb update-demo-nautilus-sr7hz "
Aug 21 00:43:58.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kzknb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5488'
Aug 21 00:43:59.887: INFO: stderr: ""
Aug 21 00:43:59.887: INFO: stdout: ""
Aug 21 00:43:59.888: INFO: update-demo-nautilus-kzknb is created but not running
Aug 21 00:44:04.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5488'
Aug 21 00:44:06.166: INFO: stderr: ""
Aug 21 00:44:06.167: INFO: stdout: "update-demo-nautilus-kzknb update-demo-nautilus-sr7hz "
Aug 21 00:44:06.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kzknb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5488'
Aug 21 00:44:07.418: INFO: stderr: ""
Aug 21 00:44:07.418: INFO: stdout: "true"
Aug 21 00:44:07.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kzknb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5488'
Aug 21 00:44:08.707: INFO: stderr: ""
Aug 21 00:44:08.707: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 00:44:08.708: INFO: validating pod update-demo-nautilus-kzknb
Aug 21 00:44:08.731: INFO: got data: { "image": "nautilus.jpg" }
Aug 21 00:44:08.732: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 00:44:08.732: INFO: update-demo-nautilus-kzknb is verified up and running
Aug 21 00:44:08.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sr7hz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5488'
Aug 21 00:44:09.978: INFO: stderr: ""
Aug 21 00:44:09.978: INFO: stdout: "true"
Aug 21 00:44:09.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sr7hz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5488'
Aug 21 00:44:11.221: INFO: stderr: ""
Aug 21 00:44:11.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 00:44:11.222: INFO: validating pod update-demo-nautilus-sr7hz
Aug 21 00:44:11.227: INFO: got data: { "image": "nautilus.jpg" }
Aug 21 00:44:11.228: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 00:44:11.228: INFO: update-demo-nautilus-sr7hz is verified up and running
STEP: using delete to clean up resources
Aug 21 00:44:11.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5488'
Aug 21 00:44:12.523: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 00:44:12.523: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 21 00:44:12.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5488'
Aug 21 00:44:13.814: INFO: stderr: "No resources found in kubectl-5488 namespace.\n"
Aug 21 00:44:13.814: INFO: stdout: ""
Aug 21 00:44:13.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5488 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 00:44:15.123: INFO: stderr: ""
Aug 21 00:44:15.124: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:44:15.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5488" for this suite.
• [SLOW TEST:23.169 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":95,"skipped":1439,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:44:15.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 21 00:44:15.279: INFO: Waiting up to 5m0s for pod "pod-9d3f2949-7a98-44c3-896d-0a43266f9711" in namespace "emptydir-9837" to be "success or failure"
Aug 21 00:44:15.300: INFO: Pod "pod-9d3f2949-7a98-44c3-896d-0a43266f9711": Phase="Pending", Reason="", readiness=false. Elapsed: 20.170686ms
Aug 21 00:44:17.453: INFO: Pod "pod-9d3f2949-7a98-44c3-896d-0a43266f9711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173956681s
Aug 21 00:44:19.460: INFO: Pod "pod-9d3f2949-7a98-44c3-896d-0a43266f9711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181005613s
Aug 21 00:44:21.484: INFO: Pod "pod-9d3f2949-7a98-44c3-896d-0a43266f9711": Phase="Running", Reason="", readiness=true. Elapsed: 6.204145292s
Aug 21 00:44:23.490: INFO: Pod "pod-9d3f2949-7a98-44c3-896d-0a43266f9711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.210908966s
STEP: Saw pod success
Aug 21 00:44:23.491: INFO: Pod "pod-9d3f2949-7a98-44c3-896d-0a43266f9711" satisfied condition "success or failure"
Aug 21 00:44:23.495: INFO: Trying to get logs from node jerma-worker pod pod-9d3f2949-7a98-44c3-896d-0a43266f9711 container test-container:
STEP: delete the pod
Aug 21 00:44:24.187: INFO: Waiting for pod pod-9d3f2949-7a98-44c3-896d-0a43266f9711 to disappear
Aug 21 00:44:24.191: INFO: Pod pod-9d3f2949-7a98-44c3-896d-0a43266f9711 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:44:24.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9837" for this suite.
• [SLOW TEST:9.064 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1442,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:44:24.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-290d2424-ed81-4bf2-8562-2024da850e1a
STEP: Creating a pod to test consume secrets
Aug 21 00:44:24.786: INFO: Waiting up to 5m0s for pod "pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19" in namespace "secrets-6980" to be "success or failure"
Aug 21 00:44:24.842: INFO: Pod "pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19": Phase="Pending", Reason="", readiness=false. Elapsed: 55.6128ms
Aug 21 00:44:26.872: INFO: Pod "pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086175737s
Aug 21 00:44:28.987: INFO: Pod "pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200586203s
STEP: Saw pod success
Aug 21 00:44:28.987: INFO: Pod "pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19" satisfied condition "success or failure"
Aug 21 00:44:28.991: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19 container secret-volume-test:
STEP: delete the pod
Aug 21 00:44:29.287: INFO: Waiting for pod pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19 to disappear
Aug 21 00:44:29.329: INFO: Pod pod-secrets-8e7512ca-0e8d-4955-9192-f9299e386a19 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:44:29.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6980" for this suite.
• [SLOW TEST:5.148 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1451,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:44:29.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 21 00:44:35.052: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:44:35.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3319" for this suite. • [SLOW TEST:6.124 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":98,"skipped":1474,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:44:35.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run deployment 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629 [It] should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 21 00:44:35.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8954' Aug 21 00:44:36.981: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 21 00:44:36.981: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 Aug 21 00:44:39.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8954' Aug 21 00:44:40.818: INFO: stderr: "" Aug 21 00:44:40.818: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:44:40.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8954" for this suite. 
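The deprecated `--generator=deployment/apps.v1` invocation logged above is roughly equivalent to applying a Deployment manifest like the one below. This is a sketch: the image and name come from the log, but the `run:` label the generator sets is an assumption.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment   # assumed generator label
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```

As the stderr warning notes, `kubectl run` with this generator is deprecated; `kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine` is the non-deprecated equivalent.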
• [SLOW TEST:5.375 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625 should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":99,"skipped":1486,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:44:40.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Aug 21 00:44:41.126: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl 
--kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix734681879/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:44:42.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1393" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":100,"skipped":1498,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:44:42.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:44:53.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3550" for this suite. • [SLOW TEST:11.704 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":101,"skipped":1518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:44:53.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 21 00:45:02.035: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 00:45:02.061: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 00:45:04.062: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 00:45:04.068: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 00:45:06.062: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 00:45:06.072: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:45:06.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-989" for this suite. 
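The pod under test attaches a PostStart exec hook to its container. A minimal manifest of that shape looks like the following sketch; the pod name matches the log, but the image and hook command are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook   # container name is an assumption
    image: docker.io/library/httpd:2.4.38-alpine
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it is created;
          # the test verifies the hook executed before tearing the pod down.
          command: ["/bin/sh", "-c", "echo poststart > /tmp/poststart"]
```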
• [SLOW TEST:12.223 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1544,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:45:06.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name 
projected-secret-test-map-9258f4c7-71b7-43d8-a958-9cb8fa817b68 STEP: Creating a pod to test consume secrets Aug 21 00:45:06.224: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2" in namespace "projected-756" to be "success or failure" Aug 21 00:45:06.268: INFO: Pod "pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 43.658448ms Aug 21 00:45:08.276: INFO: Pod "pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051596688s Aug 21 00:45:10.282: INFO: Pod "pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058278964s STEP: Saw pod success Aug 21 00:45:10.283: INFO: Pod "pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2" satisfied condition "success or failure" Aug 21 00:45:10.287: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2 container projected-secret-volume-test: STEP: delete the pod Aug 21 00:45:10.333: INFO: Waiting for pod pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2 to disappear Aug 21 00:45:10.579: INFO: Pod pod-projected-secrets-26ba48c6-9452-47ee-94b5-69a05614dfe2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:45:10.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-756" for this suite. 
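The "mappings and Item Mode set" case above consumes a secret through a projected volume with a per-item path and file mode. A manifest of that shape can be sketched as follows; the secret key, mapped path, and mode value are assumptions, not taken from the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1              # assumed key
            path: new-path-data-1    # mapping: key is exposed under this path
            mode: 0400               # "Item Mode set": per-item file permission
```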
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1563,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:45:10.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-52153485-35ea-46d2-a3ee-fe0da9644916 STEP: Creating a pod to test consume secrets Aug 21 00:45:10.829: INFO: Waiting up to 5m0s for pod "pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81" in namespace "secrets-243" to be "success or failure" Aug 21 00:45:10.833: INFO: Pod "pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81": Phase="Pending", Reason="", readiness=false. Elapsed: 3.48995ms Aug 21 00:45:13.012: INFO: Pod "pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182349637s Aug 21 00:45:15.018: INFO: Pod "pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.188454648s STEP: Saw pod success Aug 21 00:45:15.018: INFO: Pod "pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81" satisfied condition "success or failure" Aug 21 00:45:15.023: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81 container secret-volume-test: STEP: delete the pod Aug 21 00:45:15.469: INFO: Waiting for pod pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81 to disappear Aug 21 00:45:15.498: INFO: Pod pod-secrets-4597e5ca-684d-4800-a11e-c9920e2a6e81 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:45:15.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-243" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1564,"failed":0} SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:45:15.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:45:15.901: INFO: Creating deployment "test-recreate-deployment" Aug 21 00:45:15.908: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 21 00:45:16.251: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 21 00:45:18.265: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 21 00:45:18.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567516, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567516, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567516, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567515, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 00:45:20.278: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 21 00:45:20.289: INFO: Updating deployment test-recreate-deployment Aug 21 00:45:20.289: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 21 00:45:21.020: INFO: Deployment 
"test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2734 /apis/apps/v1/namespaces/deployment-2734/deployments/test-recreate-deployment 1d6e117b-12c4-4006-9153-13aeeb2a4dde 1982491 2 2020-08-21 00:45:15 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002d13398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-21 00:45:20 +0000 UTC,LastTransitionTime:2020-08-21 00:45:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-21 00:45:20 +0000 UTC,LastTransitionTime:2020-08-21 00:45:15 +0000 
UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 21 00:45:21.032: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2734 /apis/apps/v1/namespaces/deployment-2734/replicasets/test-recreate-deployment-5f94c574ff ff099166-f21c-49cf-a12d-744723a3c76a 1982488 1 2020-08-21 00:45:20 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1d6e117b-12c4-4006-9153-13aeeb2a4dde 0x400511c3a7 0x400511c3a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400511c408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 21 00:45:21.032: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 21 00:45:21.033: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 
deployment-2734 /apis/apps/v1/namespaces/deployment-2734/replicasets/test-recreate-deployment-799c574856 3224433b-a853-43ec-a8c4-f7f6dcfb0df0 1982480 2 2020-08-21 00:45:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1d6e117b-12c4-4006-9153-13aeeb2a4dde 0x400511c477 0x400511c478}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400511c4e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 21 00:45:21.038: INFO: Pod "test-recreate-deployment-5f94c574ff-hvqsm" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-hvqsm test-recreate-deployment-5f94c574ff- deployment-2734 /api/v1/namespaces/deployment-2734/pods/test-recreate-deployment-5f94c574ff-hvqsm 8cc0c2c3-61cf-4b9f-b715-72acb2838f83 1982493 0 2020-08-21 00:45:20 +0000 UTC map[name:sample-pod-3 
pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff ff099166-f21c-49cf-a12d-744723a3c76a 0x400511c937 0x400511c938}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fh48m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fh48m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fh48m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,R
unAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:45:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:45:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:45:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 00:45:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-21 00:45:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:45:21.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2734" for this suite. • [SLOW TEST:5.572 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":105,"skipped":1566,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:45:21.109: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6419.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6419.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 00:45:31.335: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-4dcf72d7-cb6e-4077-b468-bc7b40f7a4aa: Get https://172.30.12.66:37695/api/v1/namespaces/dns-6419/pods/dns-test-4dcf72d7-cb6e-4077-b468-bc7b40f7a4aa/proxy/results/wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local: stream error: stream ID 6133; INTERNAL_ERROR Aug 21 00:45:31.344: INFO: Lookups using dns-6419/dns-test-4dcf72d7-cb6e-4077-b468-bc7b40f7a4aa failed for: [wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local] Aug 21 00:45:36.356: INFO: DNS probes using dns-test-4dcf72d7-cb6e-4077-b468-bc7b40f7a4aa succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6419.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig 
+short dns-test-service-3.dns-6419.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 00:45:42.513: INFO: File wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 21 00:45:42.520: INFO: File jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 21 00:45:42.520: INFO: Lookups using dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 failed for: [wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local] Aug 21 00:45:47.551: INFO: File wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 21 00:45:47.556: INFO: File jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 21 00:45:47.556: INFO: Lookups using dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 failed for: [wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local] Aug 21 00:45:52.529: INFO: File wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 21 00:45:52.533: INFO: File jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 21 00:45:52.533: INFO: Lookups using dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 failed for: [wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local] Aug 21 00:45:57.529: INFO: File wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 21 00:45:57.534: INFO: File jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local from pod dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 21 00:45:57.534: INFO: Lookups using dns-6419/dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 failed for: [wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local] Aug 21 00:46:02.533: INFO: DNS probes using dns-test-48a087a8-45b2-40e3-a36e-9dd097501349 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6419.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6419.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6419.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6419.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 00:46:11.110: INFO: DNS probes using dns-test-675c696f-a548-4006-b747-ec476b9839b4 succeeded STEP: deleting the pod STEP: deleting the test externalName 
service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:46:11.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6419" for this suite. • [SLOW TEST:50.937 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":106,"skipped":1569,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:46:12.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 21 
00:46:20.261: INFO: 10 pods remaining Aug 21 00:46:20.262: INFO: 9 pods has nil DeletionTimestamp Aug 21 00:46:20.262: INFO: Aug 21 00:46:20.980: INFO: 0 pods remaining Aug 21 00:46:20.980: INFO: 0 pods has nil DeletionTimestamp Aug 21 00:46:20.980: INFO: Aug 21 00:46:22.081: INFO: 0 pods remaining Aug 21 00:46:22.081: INFO: 0 pods has nil DeletionTimestamp Aug 21 00:46:22.082: INFO: Aug 21 00:46:22.813: INFO: 0 pods remaining Aug 21 00:46:22.814: INFO: 0 pods has nil DeletionTimestamp Aug 21 00:46:22.814: INFO: STEP: Gathering metrics W0821 00:46:23.964522 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 21 00:46:23.964: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:46:23.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2938" for this suite. 
• [SLOW TEST:12.120 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":107,"skipped":1574,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:46:24.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:46:38.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2982" for this suite. • [SLOW TEST:13.947 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":108,"skipped":1586,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:46:38.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:46:41.333: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:46:43.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 00:46:45.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567601, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:46:48.598: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on 
ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:46:48.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1218" for this suite. STEP: Destroying namespace "webhook-1218-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.312 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":109,"skipped":1603,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:46:49.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:46:49.805: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9370 I0821 00:46:49.915040 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9370, replica count: 1 I0821 00:46:50.966391 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 00:46:51.967084 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 00:46:52.967997 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 00:46:53.968631 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 00:46:54.969394 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 00:46:55.970062 7 runners.go:189] svc-latency-rc Pods: 1 out 
of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 21 00:46:56.109: INFO: Created: latency-svc-c987w Aug 21 00:46:56.169: INFO: Got endpoints: latency-svc-c987w [96.317622ms] Aug 21 00:46:56.205: INFO: Created: latency-svc-bmb4c Aug 21 00:46:56.215: INFO: Got endpoints: latency-svc-bmb4c [44.940959ms] Aug 21 00:46:56.231: INFO: Created: latency-svc-rq4lq Aug 21 00:46:56.243: INFO: Got endpoints: latency-svc-rq4lq [72.966781ms] Aug 21 00:46:56.311: INFO: Created: latency-svc-7hf7j Aug 21 00:46:56.316: INFO: Got endpoints: latency-svc-7hf7j [145.675246ms] Aug 21 00:46:56.367: INFO: Created: latency-svc-xqwtt Aug 21 00:46:56.389: INFO: Got endpoints: latency-svc-xqwtt [218.133918ms] Aug 21 00:46:56.449: INFO: Created: latency-svc-5x4mc Aug 21 00:46:56.455: INFO: Got endpoints: latency-svc-5x4mc [284.714426ms] Aug 21 00:46:56.489: INFO: Created: latency-svc-t6rd5 Aug 21 00:46:56.519: INFO: Got endpoints: latency-svc-t6rd5 [349.220275ms] Aug 21 00:46:56.547: INFO: Created: latency-svc-s6jhk Aug 21 00:46:56.593: INFO: Got endpoints: latency-svc-s6jhk [423.031068ms] Aug 21 00:46:56.593: INFO: Created: latency-svc-fdvwc Aug 21 00:46:56.607: INFO: Got endpoints: latency-svc-fdvwc [436.526692ms] Aug 21 00:46:56.636: INFO: Created: latency-svc-z2vvn Aug 21 00:46:56.649: INFO: Got endpoints: latency-svc-z2vvn [478.469254ms] Aug 21 00:46:56.670: INFO: Created: latency-svc-5g5zw Aug 21 00:46:56.679: INFO: Got endpoints: latency-svc-5g5zw [506.706122ms] Aug 21 00:46:56.754: INFO: Created: latency-svc-m7fcg Aug 21 00:46:56.759: INFO: Got endpoints: latency-svc-m7fcg [585.996507ms] Aug 21 00:46:56.799: INFO: Created: latency-svc-fwvlz Aug 21 00:46:56.823: INFO: Got endpoints: latency-svc-fwvlz [650.840528ms] Aug 21 00:46:56.847: INFO: Created: latency-svc-fhmts Aug 21 00:46:56.893: INFO: Got endpoints: latency-svc-fhmts [719.764216ms] Aug 21 00:46:56.895: INFO: Created: latency-svc-wb6nn Aug 21 00:46:56.911: INFO: Got 
endpoints: latency-svc-wb6nn [738.937947ms] Aug 21 00:46:56.932: INFO: Created: latency-svc-f6qfc Aug 21 00:46:56.960: INFO: Got endpoints: latency-svc-f6qfc [787.827291ms] Aug 21 00:46:56.981: INFO: Created: latency-svc-rghxz Aug 21 00:46:57.024: INFO: Got endpoints: latency-svc-rghxz [808.565595ms] Aug 21 00:46:57.044: INFO: Created: latency-svc-rb76v Aug 21 00:46:57.056: INFO: Got endpoints: latency-svc-rb76v [813.023406ms] Aug 21 00:46:57.074: INFO: Created: latency-svc-76848 Aug 21 00:46:57.087: INFO: Got endpoints: latency-svc-76848 [771.557692ms] Aug 21 00:46:57.106: INFO: Created: latency-svc-n4mj4 Aug 21 00:46:57.118: INFO: Got endpoints: latency-svc-n4mj4 [727.979226ms] Aug 21 00:46:57.197: INFO: Created: latency-svc-p2k2t Aug 21 00:46:57.221: INFO: Got endpoints: latency-svc-p2k2t [766.236555ms] Aug 21 00:46:57.248: INFO: Created: latency-svc-lgvf6 Aug 21 00:46:57.268: INFO: Got endpoints: latency-svc-lgvf6 [748.105912ms] Aug 21 00:46:57.291: INFO: Created: latency-svc-9kl8q Aug 21 00:46:57.347: INFO: Got endpoints: latency-svc-9kl8q [753.94449ms] Aug 21 00:46:57.349: INFO: Created: latency-svc-xhpmq Aug 21 00:46:57.372: INFO: Got endpoints: latency-svc-xhpmq [765.190553ms] Aug 21 00:46:57.407: INFO: Created: latency-svc-m4xsr Aug 21 00:46:57.417: INFO: Got endpoints: latency-svc-m4xsr [767.767128ms] Aug 21 00:46:57.442: INFO: Created: latency-svc-qw4dk Aug 21 00:46:57.503: INFO: Got endpoints: latency-svc-qw4dk [823.886987ms] Aug 21 00:46:57.505: INFO: Created: latency-svc-ggz2f Aug 21 00:46:57.513: INFO: Got endpoints: latency-svc-ggz2f [753.842776ms] Aug 21 00:46:57.538: INFO: Created: latency-svc-jwcnn Aug 21 00:46:57.567: INFO: Got endpoints: latency-svc-jwcnn [742.973221ms] Aug 21 00:46:57.599: INFO: Created: latency-svc-8r6ph Aug 21 00:46:57.660: INFO: Got endpoints: latency-svc-8r6ph [766.674487ms] Aug 21 00:46:57.681: INFO: Created: latency-svc-5ldzt Aug 21 00:46:57.693: INFO: Got endpoints: latency-svc-5ldzt [781.118542ms] Aug 21 00:46:57.710: 
INFO: Created: latency-svc-w69xh Aug 21 00:46:57.722: INFO: Got endpoints: latency-svc-w69xh [762.201663ms] Aug 21 00:46:57.742: INFO: Created: latency-svc-jn9ng Aug 21 00:46:57.790: INFO: Got endpoints: latency-svc-jn9ng [766.017256ms] Aug 21 00:46:57.802: INFO: Created: latency-svc-mmbhz Aug 21 00:46:57.813: INFO: Got endpoints: latency-svc-mmbhz [757.070566ms] Aug 21 00:46:57.833: INFO: Created: latency-svc-lcxvz Aug 21 00:46:57.855: INFO: Got endpoints: latency-svc-lcxvz [766.890507ms] Aug 21 00:46:57.878: INFO: Created: latency-svc-8x5dd Aug 21 00:46:57.952: INFO: Got endpoints: latency-svc-8x5dd [834.6836ms] Aug 21 00:46:57.954: INFO: Created: latency-svc-fj2kl Aug 21 00:46:57.958: INFO: Got endpoints: latency-svc-fj2kl [736.212749ms] Aug 21 00:46:57.989: INFO: Created: latency-svc-zl8qq Aug 21 00:46:58.017: INFO: Got endpoints: latency-svc-zl8qq [748.907951ms] Aug 21 00:46:58.037: INFO: Created: latency-svc-6s5hz Aug 21 00:46:58.109: INFO: Got endpoints: latency-svc-6s5hz [761.568702ms] Aug 21 00:46:58.114: INFO: Created: latency-svc-xvw4m Aug 21 00:46:58.124: INFO: Got endpoints: latency-svc-xvw4m [751.374046ms] Aug 21 00:46:58.160: INFO: Created: latency-svc-h2rl8 Aug 21 00:46:58.185: INFO: Got endpoints: latency-svc-h2rl8 [768.074903ms] Aug 21 00:46:58.312: INFO: Created: latency-svc-w2924 Aug 21 00:46:58.315: INFO: Got endpoints: latency-svc-w2924 [811.232058ms] Aug 21 00:46:58.394: INFO: Created: latency-svc-9fwwv Aug 21 00:46:58.540: INFO: Got endpoints: latency-svc-9fwwv [1.026921571s] Aug 21 00:46:58.541: INFO: Created: latency-svc-4p6kg Aug 21 00:46:58.550: INFO: Got endpoints: latency-svc-4p6kg [982.93366ms] Aug 21 00:46:58.569: INFO: Created: latency-svc-hr7fs Aug 21 00:46:58.581: INFO: Got endpoints: latency-svc-hr7fs [921.40252ms] Aug 21 00:46:58.611: INFO: Created: latency-svc-ktlmw Aug 21 00:46:58.623: INFO: Got endpoints: latency-svc-ktlmw [930.019721ms] Aug 21 00:46:58.712: INFO: Created: latency-svc-qq7fp Aug 21 00:46:58.714: INFO: Got 
endpoints: latency-svc-qq7fp [991.747046ms] Aug 21 00:46:58.748: INFO: Created: latency-svc-s6pr7 Aug 21 00:46:58.761: INFO: Got endpoints: latency-svc-s6pr7 [971.056896ms] Aug 21 00:46:58.779: INFO: Created: latency-svc-dlx58 Aug 21 00:46:58.792: INFO: Got endpoints: latency-svc-dlx58 [978.218364ms] Aug 21 00:46:58.811: INFO: Created: latency-svc-7n5tn Aug 21 00:46:58.869: INFO: Got endpoints: latency-svc-7n5tn [1.014013609s] Aug 21 00:46:58.871: INFO: Created: latency-svc-qxhjl Aug 21 00:46:58.876: INFO: Got endpoints: latency-svc-qxhjl [923.183678ms] Aug 21 00:46:58.892: INFO: Created: latency-svc-zhs4m Aug 21 00:46:58.907: INFO: Got endpoints: latency-svc-zhs4m [949.137831ms] Aug 21 00:46:58.923: INFO: Created: latency-svc-6lxjh Aug 21 00:46:58.931: INFO: Got endpoints: latency-svc-6lxjh [913.736151ms] Aug 21 00:46:58.952: INFO: Created: latency-svc-6b6b9 Aug 21 00:46:58.967: INFO: Got endpoints: latency-svc-6b6b9 [857.858201ms] Aug 21 00:46:59.039: INFO: Created: latency-svc-9rn8m Aug 21 00:46:59.045: INFO: Got endpoints: latency-svc-9rn8m [921.546209ms] Aug 21 00:46:59.069: INFO: Created: latency-svc-k6cgv Aug 21 00:46:59.082: INFO: Got endpoints: latency-svc-k6cgv [897.339353ms] Aug 21 00:46:59.099: INFO: Created: latency-svc-jg7j2 Aug 21 00:46:59.113: INFO: Got endpoints: latency-svc-jg7j2 [797.468866ms] Aug 21 00:46:59.191: INFO: Created: latency-svc-dhddx Aug 21 00:46:59.196: INFO: Got endpoints: latency-svc-dhddx [656.109001ms] Aug 21 00:46:59.255: INFO: Created: latency-svc-rn48w Aug 21 00:46:59.272: INFO: Got endpoints: latency-svc-rn48w [722.073211ms] Aug 21 00:46:59.289: INFO: Created: latency-svc-4xfhm Aug 21 00:46:59.366: INFO: Got endpoints: latency-svc-4xfhm [784.08694ms] Aug 21 00:46:59.366: INFO: Created: latency-svc-lw9vd Aug 21 00:46:59.383: INFO: Got endpoints: latency-svc-lw9vd [760.270468ms] Aug 21 00:46:59.539: INFO: Created: latency-svc-n644f Aug 21 00:46:59.542: INFO: Got endpoints: latency-svc-n644f [827.228093ms] Aug 21 00:46:59.616: 
INFO: Created: latency-svc-967jc Aug 21 00:46:59.630: INFO: Got endpoints: latency-svc-967jc [867.981797ms] Aug 21 00:46:59.676: INFO: Created: latency-svc-ttvfd Aug 21 00:46:59.683: INFO: Got endpoints: latency-svc-ttvfd [891.06052ms] Aug 21 00:46:59.703: INFO: Created: latency-svc-b4db9 Aug 21 00:46:59.722: INFO: Got endpoints: latency-svc-b4db9 [852.923742ms] Aug 21 00:46:59.747: INFO: Created: latency-svc-mslrq Aug 21 00:46:59.762: INFO: Got endpoints: latency-svc-mslrq [885.560017ms] Aug 21 00:46:59.849: INFO: Created: latency-svc-n4d9v Aug 21 00:46:59.892: INFO: Created: latency-svc-q6vxv Aug 21 00:46:59.894: INFO: Got endpoints: latency-svc-n4d9v [986.553821ms] Aug 21 00:46:59.937: INFO: Got endpoints: latency-svc-q6vxv [1.005968634s] Aug 21 00:47:00.024: INFO: Created: latency-svc-58cvv Aug 21 00:47:00.053: INFO: Got endpoints: latency-svc-58cvv [1.085755368s] Aug 21 00:47:00.081: INFO: Created: latency-svc-sddcq Aug 21 00:47:00.246: INFO: Got endpoints: latency-svc-sddcq [1.200180695s] Aug 21 00:47:00.544: INFO: Created: latency-svc-pbf5w Aug 21 00:47:00.863: INFO: Got endpoints: latency-svc-pbf5w [1.78050222s] Aug 21 00:47:00.921: INFO: Created: latency-svc-lbrpg Aug 21 00:47:01.078: INFO: Got endpoints: latency-svc-lbrpg [1.965199858s] Aug 21 00:47:01.079: INFO: Created: latency-svc-v66hl Aug 21 00:47:01.095: INFO: Got endpoints: latency-svc-v66hl [1.898318561s] Aug 21 00:47:01.134: INFO: Created: latency-svc-j5bwt Aug 21 00:47:01.155: INFO: Got endpoints: latency-svc-j5bwt [1.882406992s] Aug 21 00:47:01.170: INFO: Created: latency-svc-phbr2 Aug 21 00:47:01.251: INFO: Got endpoints: latency-svc-phbr2 [1.884722239s] Aug 21 00:47:01.272: INFO: Created: latency-svc-dkt82 Aug 21 00:47:01.288: INFO: Got endpoints: latency-svc-dkt82 [1.904259046s] Aug 21 00:47:01.302: INFO: Created: latency-svc-jltrb Aug 21 00:47:01.318: INFO: Got endpoints: latency-svc-jltrb [1.776653989s] Aug 21 00:47:01.333: INFO: Created: latency-svc-kwbjg Aug 21 00:47:01.351: INFO: Got 
endpoints: latency-svc-kwbjg [1.721423681s]
Aug 21 00:47:01.420: INFO: Created: latency-svc-qn22q
Aug 21 00:47:01.451: INFO: Got endpoints: latency-svc-qn22q [1.767516702s]
Aug 21 00:47:01.481: INFO: Created: latency-svc-gvlt9
Aug 21 00:47:01.495: INFO: Got endpoints: latency-svc-gvlt9 [1.772907234s]
Aug 21 00:47:01.568: INFO: Created: latency-svc-j5wbp
Aug 21 00:47:01.598: INFO: Created: latency-svc-wnhr2
Aug 21 00:47:01.599: INFO: Got endpoints: latency-svc-j5wbp [1.836970501s]
Aug 21 00:47:01.619: INFO: Got endpoints: latency-svc-wnhr2 [1.724917724s]
Aug 21 00:47:01.636: INFO: Created: latency-svc-brnxw
Aug 21 00:47:01.720: INFO: Got endpoints: latency-svc-brnxw [1.782939684s]
Aug 21 00:47:01.734: INFO: Created: latency-svc-vzp8z
Aug 21 00:47:01.750: INFO: Got endpoints: latency-svc-vzp8z [1.696207982s]
Aug 21 00:47:01.777: INFO: Created: latency-svc-8r89z
Aug 21 00:47:01.786: INFO: Got endpoints: latency-svc-8r89z [1.539404167s]
Aug 21 00:47:01.813: INFO: Created: latency-svc-gmm86
Aug 21 00:47:01.928: INFO: Got endpoints: latency-svc-gmm86 [1.064670576s]
Aug 21 00:47:01.930: INFO: Created: latency-svc-8ph9c
Aug 21 00:47:01.936: INFO: Got endpoints: latency-svc-8ph9c [857.739119ms]
Aug 21 00:47:01.983: INFO: Created: latency-svc-mh2fh
Aug 21 00:47:01.991: INFO: Got endpoints: latency-svc-mh2fh [896.355908ms]
Aug 21 00:47:02.014: INFO: Created: latency-svc-qs4bs
Aug 21 00:47:02.083: INFO: Got endpoints: latency-svc-qs4bs [928.558689ms]
Aug 21 00:47:02.084: INFO: Created: latency-svc-mpkql
Aug 21 00:47:02.087: INFO: Got endpoints: latency-svc-mpkql [95.633324ms]
Aug 21 00:47:02.110: INFO: Created: latency-svc-6m8hd
Aug 21 00:47:02.123: INFO: Got endpoints: latency-svc-6m8hd [872.291689ms]
Aug 21 00:47:02.145: INFO: Created: latency-svc-s4hjs
Aug 21 00:47:02.153: INFO: Got endpoints: latency-svc-s4hjs [865.360026ms]
Aug 21 00:47:02.173: INFO: Created: latency-svc-crjfl
Aug 21 00:47:02.247: INFO: Got endpoints: latency-svc-crjfl [928.189176ms]
Aug 21 00:47:02.250: INFO: Created: latency-svc-h4cgf
Aug 21 00:47:02.256: INFO: Got endpoints: latency-svc-h4cgf [904.481895ms]
Aug 21 00:47:02.278: INFO: Created: latency-svc-4l5sc
Aug 21 00:47:02.293: INFO: Got endpoints: latency-svc-4l5sc [841.69375ms]
Aug 21 00:47:02.318: INFO: Created: latency-svc-7s9gf
Aug 21 00:47:02.329: INFO: Got endpoints: latency-svc-7s9gf [833.809564ms]
Aug 21 00:47:02.412: INFO: Created: latency-svc-fmv7q
Aug 21 00:47:02.443: INFO: Created: latency-svc-wqnh8
Aug 21 00:47:02.444: INFO: Got endpoints: latency-svc-fmv7q [844.901634ms]
Aug 21 00:47:02.456: INFO: Got endpoints: latency-svc-wqnh8 [837.043926ms]
Aug 21 00:47:02.470: INFO: Created: latency-svc-v9phn
Aug 21 00:47:02.493: INFO: Got endpoints: latency-svc-v9phn [772.605429ms]
Aug 21 00:47:02.563: INFO: Created: latency-svc-gwdmc
Aug 21 00:47:02.565: INFO: Got endpoints: latency-svc-gwdmc [815.127298ms]
Aug 21 00:47:02.611: INFO: Created: latency-svc-xt2rt
Aug 21 00:47:02.656: INFO: Got endpoints: latency-svc-xt2rt [870.110273ms]
Aug 21 00:47:02.721: INFO: Created: latency-svc-kkrkv
Aug 21 00:47:02.757: INFO: Got endpoints: latency-svc-kkrkv [828.474866ms]
Aug 21 00:47:02.851: INFO: Created: latency-svc-nsmk7
Aug 21 00:47:02.852: INFO: Got endpoints: latency-svc-nsmk7 [916.360606ms]
Aug 21 00:47:02.993: INFO: Created: latency-svc-wplds
Aug 21 00:47:03.003: INFO: Got endpoints: latency-svc-wplds [919.1261ms]
Aug 21 00:47:03.068: INFO: Created: latency-svc-r4mcq
Aug 21 00:47:03.082: INFO: Got endpoints: latency-svc-r4mcq [994.208447ms]
Aug 21 00:47:03.191: INFO: Created: latency-svc-9zm4p
Aug 21 00:47:03.214: INFO: Got endpoints: latency-svc-9zm4p [1.09101019s]
Aug 21 00:47:03.341: INFO: Created: latency-svc-dxstj
Aug 21 00:47:03.341: INFO: Created: latency-svc-fp24j
Aug 21 00:47:03.386: INFO: Got endpoints: latency-svc-dxstj [1.232844313s]
Aug 21 00:47:03.387: INFO: Got endpoints: latency-svc-fp24j [1.13955582s]
Aug 21 00:47:03.422: INFO: Created: latency-svc-8rtwf
Aug 21 00:47:03.436: INFO: Got endpoints: latency-svc-8rtwf [1.1801785s]
Aug 21 00:47:03.514: INFO: Created: latency-svc-br75h
Aug 21 00:47:03.521: INFO: Got endpoints: latency-svc-br75h [1.227971116s]
Aug 21 00:47:03.578: INFO: Created: latency-svc-b2sst
Aug 21 00:47:03.712: INFO: Got endpoints: latency-svc-b2sst [1.382683528s]
Aug 21 00:47:03.722: INFO: Created: latency-svc-rkp6g
Aug 21 00:47:03.761: INFO: Got endpoints: latency-svc-rkp6g [1.316969681s]
Aug 21 00:47:03.777: INFO: Created: latency-svc-srwfx
Aug 21 00:47:03.803: INFO: Got endpoints: latency-svc-srwfx [1.346711365s]
Aug 21 00:47:03.879: INFO: Created: latency-svc-hjzcw
Aug 21 00:47:03.882: INFO: Got endpoints: latency-svc-hjzcw [1.389152535s]
Aug 21 00:47:03.935: INFO: Created: latency-svc-dqzph
Aug 21 00:47:03.979: INFO: Got endpoints: latency-svc-dqzph [1.413568969s]
Aug 21 00:47:04.097: INFO: Created: latency-svc-sgjjn
Aug 21 00:47:04.101: INFO: Got endpoints: latency-svc-sgjjn [1.444702045s]
Aug 21 00:47:04.154: INFO: Created: latency-svc-2mkpf
Aug 21 00:47:04.177: INFO: Got endpoints: latency-svc-2mkpf [1.419516188s]
Aug 21 00:47:04.250: INFO: Created: latency-svc-8t2w6
Aug 21 00:47:04.581: INFO: Got endpoints: latency-svc-8t2w6 [1.728890772s]
Aug 21 00:47:04.582: INFO: Created: latency-svc-zsqs2
Aug 21 00:47:04.917: INFO: Got endpoints: latency-svc-zsqs2 [1.913902473s]
Aug 21 00:47:04.953: INFO: Created: latency-svc-dxm4n
Aug 21 00:47:05.010: INFO: Got endpoints: latency-svc-dxm4n [1.928648811s]
Aug 21 00:47:05.173: INFO: Created: latency-svc-ccpxb
Aug 21 00:47:05.403: INFO: Got endpoints: latency-svc-ccpxb [2.18830277s]
Aug 21 00:47:05.430: INFO: Created: latency-svc-n62rr
Aug 21 00:47:05.640: INFO: Got endpoints: latency-svc-n62rr [2.253082584s]
Aug 21 00:47:05.733: INFO: Created: latency-svc-bx97x
Aug 21 00:47:05.880: INFO: Got endpoints: latency-svc-bx97x [2.493375046s]
Aug 21 00:47:05.895: INFO: Created: latency-svc-zsx59
Aug 21 00:47:05.940: INFO: Got endpoints: latency-svc-zsx59 [2.503834905s]
Aug 21 00:47:06.032: INFO: Created: latency-svc-2nh68
Aug 21 00:47:06.046: INFO: Got endpoints: latency-svc-2nh68 [2.524748003s]
Aug 21 00:47:06.095: INFO: Created: latency-svc-c28tz
Aug 21 00:47:06.115: INFO: Got endpoints: latency-svc-c28tz [2.40327267s]
Aug 21 00:47:06.185: INFO: Created: latency-svc-wr6jz
Aug 21 00:47:06.198: INFO: Got endpoints: latency-svc-wr6jz [2.436907423s]
Aug 21 00:47:06.249: INFO: Created: latency-svc-b2gbg
Aug 21 00:47:06.276: INFO: Got endpoints: latency-svc-b2gbg [2.472726288s]
Aug 21 00:47:06.347: INFO: Created: latency-svc-nx8t4
Aug 21 00:47:06.367: INFO: Got endpoints: latency-svc-nx8t4 [2.48475734s]
Aug 21 00:47:06.404: INFO: Created: latency-svc-4kdxv
Aug 21 00:47:06.427: INFO: Got endpoints: latency-svc-4kdxv [2.448741049s]
Aug 21 00:47:06.441: INFO: Created: latency-svc-8d87t
Aug 21 00:47:06.526: INFO: Got endpoints: latency-svc-8d87t [2.425615718s]
Aug 21 00:47:06.527: INFO: Created: latency-svc-spsfp
Aug 21 00:47:06.535: INFO: Got endpoints: latency-svc-spsfp [2.358513426s]
Aug 21 00:47:06.559: INFO: Created: latency-svc-rsnmz
Aug 21 00:47:06.572: INFO: Got endpoints: latency-svc-rsnmz [1.990157191s]
Aug 21 00:47:06.592: INFO: Created: latency-svc-c47d2
Aug 21 00:47:06.609: INFO: Got endpoints: latency-svc-c47d2 [1.691799603s]
Aug 21 00:47:06.700: INFO: Created: latency-svc-d28hm
Aug 21 00:47:06.711: INFO: Got endpoints: latency-svc-d28hm [1.699959146s]
Aug 21 00:47:06.733: INFO: Created: latency-svc-752df
Aug 21 00:47:06.752: INFO: Got endpoints: latency-svc-752df [1.349353686s]
Aug 21 00:47:06.772: INFO: Created: latency-svc-pwm9l
Aug 21 00:47:06.784: INFO: Got endpoints: latency-svc-pwm9l [1.14365553s]
Aug 21 00:47:06.856: INFO: Created: latency-svc-pkfs7
Aug 21 00:47:06.939: INFO: Got endpoints: latency-svc-pkfs7 [1.059162648s]
Aug 21 00:47:07.132: INFO: Created: latency-svc-x82dm
Aug 21 00:47:07.384: INFO: Got endpoints: latency-svc-x82dm [1.443257034s]
Aug 21 00:47:07.390: INFO: Created: latency-svc-p9glj
Aug 21 00:47:07.407: INFO: Got endpoints: latency-svc-p9glj [1.36165355s]
Aug 21 00:47:07.620: INFO: Created: latency-svc-5c9qc
Aug 21 00:47:07.685: INFO: Got endpoints: latency-svc-5c9qc [1.569322226s]
Aug 21 00:47:07.802: INFO: Created: latency-svc-42n2v
Aug 21 00:47:07.815: INFO: Got endpoints: latency-svc-42n2v [1.617000135s]
Aug 21 00:47:07.853: INFO: Created: latency-svc-7bs2c
Aug 21 00:47:08.114: INFO: Got endpoints: latency-svc-7bs2c [1.837664112s]
Aug 21 00:47:08.124: INFO: Created: latency-svc-wfkr8
Aug 21 00:47:08.167: INFO: Got endpoints: latency-svc-wfkr8 [1.800195448s]
Aug 21 00:47:08.202: INFO: Created: latency-svc-7pprq
Aug 21 00:47:08.604: INFO: Got endpoints: latency-svc-7pprq [2.176458117s]
Aug 21 00:47:08.694: INFO: Created: latency-svc-6gvjn
Aug 21 00:47:08.861: INFO: Got endpoints: latency-svc-6gvjn [2.334359358s]
Aug 21 00:47:09.042: INFO: Created: latency-svc-5n9bg
Aug 21 00:47:09.047: INFO: Got endpoints: latency-svc-5n9bg [2.511163022s]
Aug 21 00:47:09.076: INFO: Created: latency-svc-khxrs
Aug 21 00:47:09.086: INFO: Got endpoints: latency-svc-khxrs [2.514473975s]
Aug 21 00:47:09.121: INFO: Created: latency-svc-ppwmz
Aug 21 00:47:09.142: INFO: Got endpoints: latency-svc-ppwmz [2.532510786s]
Aug 21 00:47:09.204: INFO: Created: latency-svc-lrn4t
Aug 21 00:47:09.219: INFO: Got endpoints: latency-svc-lrn4t [2.507754313s]
Aug 21 00:47:09.282: INFO: Created: latency-svc-n597k
Aug 21 00:47:09.297: INFO: Got endpoints: latency-svc-n597k [2.544993126s]
Aug 21 00:47:09.352: INFO: Created: latency-svc-k9jl5
Aug 21 00:47:09.364: INFO: Got endpoints: latency-svc-k9jl5 [2.580582247s]
Aug 21 00:47:09.406: INFO: Created: latency-svc-nvcj4
Aug 21 00:47:09.424: INFO: Got endpoints: latency-svc-nvcj4 [2.48464234s]
Aug 21 00:47:09.441: INFO: Created: latency-svc-cwqff
Aug 21 00:47:09.534: INFO: Got endpoints: latency-svc-cwqff [2.150367755s]
Aug 21 00:47:09.539: INFO: Created: latency-svc-4hnr2
Aug 21 00:47:09.550: INFO: Got endpoints: latency-svc-4hnr2 [2.142334706s]
Aug 21 00:47:09.570: INFO: Created: latency-svc-fb7b9
Aug 21 00:47:09.606: INFO: Got endpoints: latency-svc-fb7b9 [1.920857147s]
Aug 21 00:47:09.700: INFO: Created: latency-svc-6sq8s
Aug 21 00:47:09.704: INFO: Got endpoints: latency-svc-6sq8s [1.888496554s]
Aug 21 00:47:09.731: INFO: Created: latency-svc-hklpb
Aug 21 00:47:09.743: INFO: Got endpoints: latency-svc-hklpb [1.628891918s]
Aug 21 00:47:09.761: INFO: Created: latency-svc-wnzrc
Aug 21 00:47:09.786: INFO: Got endpoints: latency-svc-wnzrc [1.618484116s]
Aug 21 00:47:09.881: INFO: Created: latency-svc-2l5vq
Aug 21 00:47:09.883: INFO: Got endpoints: latency-svc-2l5vq [1.279139969s]
Aug 21 00:47:09.966: INFO: Created: latency-svc-7vnvv
Aug 21 00:47:10.042: INFO: Got endpoints: latency-svc-7vnvv [1.181002406s]
Aug 21 00:47:10.043: INFO: Created: latency-svc-wpr2f
Aug 21 00:47:10.083: INFO: Got endpoints: latency-svc-wpr2f [1.036685202s]
Aug 21 00:47:10.128: INFO: Created: latency-svc-spd2w
Aug 21 00:47:10.269: INFO: Got endpoints: latency-svc-spd2w [1.182125592s]
Aug 21 00:47:10.271: INFO: Created: latency-svc-s7hq5
Aug 21 00:47:10.297: INFO: Got endpoints: latency-svc-s7hq5 [1.155014227s]
Aug 21 00:47:10.333: INFO: Created: latency-svc-8h55f
Aug 21 00:47:10.345: INFO: Got endpoints: latency-svc-8h55f [1.125641655s]
Aug 21 00:47:10.366: INFO: Created: latency-svc-4ffl4
Aug 21 00:47:10.450: INFO: Got endpoints: latency-svc-4ffl4 [1.151961934s]
Aug 21 00:47:10.451: INFO: Created: latency-svc-mzk72
Aug 21 00:47:10.459: INFO: Got endpoints: latency-svc-mzk72 [1.094158787s]
Aug 21 00:47:10.475: INFO: Created: latency-svc-jn6p4
Aug 21 00:47:10.490: INFO: Got endpoints: latency-svc-jn6p4 [1.06522327s]
Aug 21 00:47:10.506: INFO: Created: latency-svc-nbqdm
Aug 21 00:47:10.521: INFO: Got endpoints: latency-svc-nbqdm [986.070829ms]
Aug 21 00:47:10.534: INFO: Created: latency-svc-q2r9q
Aug 21 00:47:10.545: INFO: Got endpoints: latency-svc-q2r9q [994.495988ms]
Aug 21 00:47:10.595: INFO: Created: latency-svc-q8psk
Aug 21 00:47:10.607: INFO: Got endpoints: latency-svc-q8psk [1.000792104s]
Aug 21 00:47:10.650: INFO: Created: latency-svc-hl7ww
Aug 21 00:47:10.665: INFO: Got endpoints: latency-svc-hl7ww [960.457057ms]
Aug 21 00:47:10.819: INFO: Created: latency-svc-hrnd9
Aug 21 00:47:10.825: INFO: Got endpoints: latency-svc-hrnd9 [1.081698403s]
Aug 21 00:47:11.055: INFO: Created: latency-svc-926xb
Aug 21 00:47:11.060: INFO: Got endpoints: latency-svc-926xb [1.273544933s]
Aug 21 00:47:11.112: INFO: Created: latency-svc-8gfkt
Aug 21 00:47:11.117: INFO: Got endpoints: latency-svc-8gfkt [1.232919602s]
Aug 21 00:47:11.142: INFO: Created: latency-svc-7rwf2
Aug 21 00:47:11.209: INFO: Got endpoints: latency-svc-7rwf2 [1.166067607s]
Aug 21 00:47:11.209: INFO: Created: latency-svc-jfbdh
Aug 21 00:47:11.218: INFO: Got endpoints: latency-svc-jfbdh [1.134360752s]
Aug 21 00:47:11.266: INFO: Created: latency-svc-8blng
Aug 21 00:47:11.299: INFO: Got endpoints: latency-svc-8blng [1.029882238s]
Aug 21 00:47:11.365: INFO: Created: latency-svc-sc258
Aug 21 00:47:11.370: INFO: Got endpoints: latency-svc-sc258 [1.07297153s]
Aug 21 00:47:11.400: INFO: Created: latency-svc-gvsmf
Aug 21 00:47:11.411: INFO: Got endpoints: latency-svc-gvsmf [1.066149838s]
Aug 21 00:47:11.457: INFO: Created: latency-svc-jh4g9
Aug 21 00:47:11.526: INFO: Got endpoints: latency-svc-jh4g9 [1.0763898s]
Aug 21 00:47:11.550: INFO: Created: latency-svc-fpfxk
Aug 21 00:47:11.574: INFO: Got endpoints: latency-svc-fpfxk [1.115062076s]
Aug 21 00:47:11.604: INFO: Created: latency-svc-xxwlg
Aug 21 00:47:11.616: INFO: Got endpoints: latency-svc-xxwlg [1.125925719s]
Aug 21 00:47:11.660: INFO: Created: latency-svc-pmhkp
Aug 21 00:47:11.662: INFO: Got endpoints: latency-svc-pmhkp [1.140931898s]
Aug 21 00:47:11.724: INFO: Created: latency-svc-lcjk7
Aug 21 00:47:11.737: INFO: Got endpoints: latency-svc-lcjk7 [1.192265038s]
Aug 21 00:47:11.753: INFO: Created: latency-svc-4rpjv
Aug 21 00:47:11.806: INFO: Got endpoints: latency-svc-4rpjv [1.199463267s]
Aug 21 00:47:11.823: INFO: Created: latency-svc-kpg6x
Aug 21 00:47:11.847: INFO: Got endpoints: latency-svc-kpg6x [1.182205263s]
Aug 21 00:47:11.877: INFO: Created: latency-svc-c8f54
Aug 21 00:47:11.963: INFO: Got endpoints: latency-svc-c8f54 [1.13807123s]
Aug 21 00:47:11.982: INFO: Created: latency-svc-drf7j
Aug 21 00:47:11.996: INFO: Got endpoints: latency-svc-drf7j [935.987115ms]
Aug 21 00:47:12.033: INFO: Created: latency-svc-crp54
Aug 21 00:47:12.056: INFO: Got endpoints: latency-svc-crp54 [938.883361ms]
Aug 21 00:47:12.107: INFO: Created: latency-svc-8rr5x
Aug 21 00:47:12.115: INFO: Got endpoints: latency-svc-8rr5x [906.463182ms]
Aug 21 00:47:12.138: INFO: Created: latency-svc-tk5vz
Aug 21 00:47:12.153: INFO: Got endpoints: latency-svc-tk5vz [935.185505ms]
Aug 21 00:47:12.168: INFO: Created: latency-svc-rznn6
Aug 21 00:47:12.177: INFO: Got endpoints: latency-svc-rznn6 [877.676314ms]
Aug 21 00:47:12.192: INFO: Created: latency-svc-pkqks
Aug 21 00:47:12.201: INFO: Got endpoints: latency-svc-pkqks [830.902557ms]
Aug 21 00:47:12.245: INFO: Created: latency-svc-f68b5
Aug 21 00:47:12.249: INFO: Got endpoints: latency-svc-f68b5 [838.129686ms]
Aug 21 00:47:12.273: INFO: Created: latency-svc-5zr5l
Aug 21 00:47:12.286: INFO: Got endpoints: latency-svc-5zr5l [759.469838ms]
Aug 21 00:47:12.321: INFO: Created: latency-svc-mp2m9
Aug 21 00:47:12.394: INFO: Got endpoints: latency-svc-mp2m9 [820.436537ms]
Aug 21 00:47:12.395: INFO: Created: latency-svc-sx9kw
Aug 21 00:47:12.407: INFO: Got endpoints: latency-svc-sx9kw [790.703244ms]
Aug 21 00:47:12.432: INFO: Created: latency-svc-r2l66
Aug 21 00:47:12.444: INFO: Got endpoints: latency-svc-r2l66 [781.886142ms]
Aug 21 00:47:12.459: INFO: Created: latency-svc-mjp7b
Aug 21 00:47:12.473: INFO: Got endpoints: latency-svc-mjp7b [735.313435ms]
Aug 21 00:47:12.489: INFO: Created: latency-svc-r65v9
Aug 21 00:47:12.538: INFO: Got endpoints: latency-svc-r65v9 [731.493861ms]
Aug 21 00:47:12.540: INFO: Created: latency-svc-kmznx
Aug 21 00:47:12.552: INFO: Got
endpoints: latency-svc-kmznx [704.266851ms] Aug 21 00:47:12.553: INFO: Latencies: [44.940959ms 72.966781ms 95.633324ms 145.675246ms 218.133918ms 284.714426ms 349.220275ms 423.031068ms 436.526692ms 478.469254ms 506.706122ms 585.996507ms 650.840528ms 656.109001ms 704.266851ms 719.764216ms 722.073211ms 727.979226ms 731.493861ms 735.313435ms 736.212749ms 738.937947ms 742.973221ms 748.105912ms 748.907951ms 751.374046ms 753.842776ms 753.94449ms 757.070566ms 759.469838ms 760.270468ms 761.568702ms 762.201663ms 765.190553ms 766.017256ms 766.236555ms 766.674487ms 766.890507ms 767.767128ms 768.074903ms 771.557692ms 772.605429ms 781.118542ms 781.886142ms 784.08694ms 787.827291ms 790.703244ms 797.468866ms 808.565595ms 811.232058ms 813.023406ms 815.127298ms 820.436537ms 823.886987ms 827.228093ms 828.474866ms 830.902557ms 833.809564ms 834.6836ms 837.043926ms 838.129686ms 841.69375ms 844.901634ms 852.923742ms 857.739119ms 857.858201ms 865.360026ms 867.981797ms 870.110273ms 872.291689ms 877.676314ms 885.560017ms 891.06052ms 896.355908ms 897.339353ms 904.481895ms 906.463182ms 913.736151ms 916.360606ms 919.1261ms 921.40252ms 921.546209ms 923.183678ms 928.189176ms 928.558689ms 930.019721ms 935.185505ms 935.987115ms 938.883361ms 949.137831ms 960.457057ms 971.056896ms 978.218364ms 982.93366ms 986.070829ms 986.553821ms 991.747046ms 994.208447ms 994.495988ms 1.000792104s 1.005968634s 1.014013609s 1.026921571s 1.029882238s 1.036685202s 1.059162648s 1.064670576s 1.06522327s 1.066149838s 1.07297153s 1.0763898s 1.081698403s 1.085755368s 1.09101019s 1.094158787s 1.115062076s 1.125641655s 1.125925719s 1.134360752s 1.13807123s 1.13955582s 1.140931898s 1.14365553s 1.151961934s 1.155014227s 1.166067607s 1.1801785s 1.181002406s 1.182125592s 1.182205263s 1.192265038s 1.199463267s 1.200180695s 1.227971116s 1.232844313s 1.232919602s 1.273544933s 1.279139969s 1.316969681s 1.346711365s 1.349353686s 1.36165355s 1.382683528s 1.389152535s 1.413568969s 1.419516188s 1.443257034s 1.444702045s 1.539404167s 
1.569322226s 1.617000135s 1.618484116s 1.628891918s 1.691799603s 1.696207982s 1.699959146s 1.721423681s 1.724917724s 1.728890772s 1.767516702s 1.772907234s 1.776653989s 1.78050222s 1.782939684s 1.800195448s 1.836970501s 1.837664112s 1.882406992s 1.884722239s 1.888496554s 1.898318561s 1.904259046s 1.913902473s 1.920857147s 1.928648811s 1.965199858s 1.990157191s 2.142334706s 2.150367755s 2.176458117s 2.18830277s 2.253082584s 2.334359358s 2.358513426s 2.40327267s 2.425615718s 2.436907423s 2.448741049s 2.472726288s 2.48464234s 2.48475734s 2.493375046s 2.503834905s 2.507754313s 2.511163022s 2.514473975s 2.524748003s 2.532510786s 2.544993126s 2.580582247s]
Aug 21 00:47:12.554: INFO: 50 %ile: 1.005968634s
Aug 21 00:47:12.554: INFO: 90 %ile: 2.18830277s
Aug 21 00:47:12.554: INFO: 99 %ile: 2.544993126s
Aug 21 00:47:12.554: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:47:12.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9370" for this suite.
• [SLOW TEST:23.156 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":110,"skipped":1616,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:47:12.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 00:47:15.738: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 00:47:17.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1,
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 00:47:19.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567635, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 00:47:23.138: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1
Aug 21 00:47:24.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Aug 21 00:47:25.138: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Aug 21 00:47:26.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Aug 21 00:47:27.138: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Aug 21 00:47:28.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:47:38.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2414" for this suite.
STEP: Destroying namespace "webhook-2414-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:27.705 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":111,"skipped":1649,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:47:40.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server
cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 00:47:45.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 00:47:47.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567665, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567665, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567665, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567664, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 00:47:49.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567665, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567665, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567665, loc:(*time.Location)(0x726af60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567664, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 00:47:52.757: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:47:52.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3695" for this suite.
STEP: Destroying namespace "webhook-3695-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.720 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":112,"skipped":1709,"failed":0}
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:47:53.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Aug 21 00:47:53.592: INFO: Waiting up to 5m0s for pod
"var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc" in namespace "var-expansion-1843" to be "success or failure"
Aug 21 00:47:53.787: INFO: Pod "var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 194.690797ms
Aug 21 00:47:55.792: INFO: Pod "var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199297951s
Aug 21 00:47:57.798: INFO: Pod "var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205170613s
STEP: Saw pod success
Aug 21 00:47:57.798: INFO: Pod "var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc" satisfied condition "success or failure"
Aug 21 00:47:57.801: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc container dapi-container:
STEP: delete the pod
Aug 21 00:47:57.885: INFO: Waiting for pod var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc to disappear
Aug 21 00:47:57.901: INFO: Pod var-expansion-8d64990d-ceab-400c-89bb-14383010b8dc no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:47:57.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1843" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1709,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:47:57.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:47:58.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71" in namespace "projected-318" to be "success or failure"
Aug 21 00:47:58.093: INFO: Pod "downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71": Phase="Pending", Reason="", readiness=false. Elapsed: 29.323427ms
Aug 21 00:48:00.099: INFO: Pod "downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.034797376s
Aug 21 00:48:02.105: INFO: Pod "downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041669941s
STEP: Saw pod success
Aug 21 00:48:02.106: INFO: Pod "downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71" satisfied condition "success or failure"
Aug 21 00:48:02.110: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71 container client-container:
STEP: delete the pod
Aug 21 00:48:02.139: INFO: Waiting for pod downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71 to disappear
Aug 21 00:48:02.155: INFO: Pod downwardapi-volume-274613b2-e63c-45e5-8929-a90b97da9b71 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:48:02.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-318" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1725,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:48:02.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 00:48:04.325: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 00:48:06.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567684, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567684, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567684, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567684, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 00:48:09.373: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:48:09.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4582" for this suite.
STEP: Destroying namespace "webhook-4582-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.735 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":115,"skipped":1729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:48:09.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-5620/configmap-test-21d903c5-ba47-4bb5-b44d-59cf7c6d8b3a
STEP: Creating a pod to test consume configMaps
Aug 21 00:48:10.570: INFO: Waiting up to 5m0s for pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88" in namespace "configmap-5620" to be "success or failure"
Aug 21 00:48:10.808: INFO: Pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88": Phase="Pending", Reason="", readiness=false. Elapsed: 238.201317ms
Aug 21 00:48:12.953: INFO: Pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382603882s
Aug 21 00:48:15.186: INFO: Pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.616112873s
Aug 21 00:48:17.239: INFO: Pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.669084298s
Aug 21 00:48:19.446: INFO: Pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875323271s
Aug 21 00:48:21.452: INFO: Pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.881897029s
STEP: Saw pod success
Aug 21 00:48:21.453: INFO: Pod "pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88" satisfied condition "success or failure"
Aug 21 00:48:21.457: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88 container env-test:
STEP: delete the pod
Aug 21 00:48:21.507: INFO: Waiting for pod pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88 to disappear
Aug 21 00:48:21.528: INFO: Pod pod-configmaps-30654684-e15f-47a4-99af-bd6faf424e88 no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:48:21.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5620" for this suite.
• [SLOW TEST:11.653 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1754,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:48:21.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:48:21.688: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02" in namespace "downward-api-5178" to be "success or failure"
Aug 21 00:48:21.711: INFO: Pod "downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02": Phase="Pending", Reason="", readiness=false. Elapsed: 22.123327ms
Aug 21 00:48:23.736: INFO: Pod "downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047084207s
Aug 21 00:48:25.742: INFO: Pod "downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053619394s
STEP: Saw pod success
Aug 21 00:48:25.743: INFO: Pod "downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02" satisfied condition "success or failure"
Aug 21 00:48:25.748: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02 container client-container:
STEP: delete the pod
Aug 21 00:48:25.773: INFO: Waiting for pod downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02 to disappear
Aug 21 00:48:25.778: INFO: Pod downwardapi-volume-dd93eca4-8978-4ce7-8c8d-411aba4bce02 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:48:25.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5178" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1785,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:48:25.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 00:48:25.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7816'
Aug 21 00:48:27.206: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 21 00:48:27.206: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Aug 21 00:48:27.225: INFO: scanned /root for discovery docs:
Aug 21 00:48:27.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7816'
Aug 21 00:48:44.377: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 21 00:48:44.378: INFO: stdout: "Created e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8\nScaling up e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 21 00:48:44.378: INFO: stdout: "Created e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8\nScaling up e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 21 00:48:44.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7816'
Aug 21 00:48:45.634: INFO: stderr: ""
Aug 21 00:48:45.634: INFO: stdout: "e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8-95g7z "
Aug 21 00:48:45.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8-95g7z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7816'
Aug 21 00:48:46.861: INFO: stderr: ""
Aug 21 00:48:46.861: INFO: stdout: "true"
Aug 21 00:48:46.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8-95g7z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7816'
Aug 21 00:48:48.133: INFO: stderr: ""
Aug 21 00:48:48.133: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 21 00:48:48.133: INFO: e2e-test-httpd-rc-81fe9558b6e11ab7f98b695ffaa536b8-95g7z is verified up and running
[AfterEach] Kubectl rolling-update
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 21 00:48:48.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7816'
Aug 21 00:48:49.396: INFO: stderr: ""
Aug 21 00:48:49.396: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:48:49.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7816" for this suite.
• [SLOW TEST:23.588 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl rolling-update
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
should support rolling-update to same image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":118,"skipped":1834,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:48:49.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 00:48:53.133: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 00:48:55.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567733, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567733, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567733, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567733, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 00:48:58.346: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:48:58.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:48:59.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2995" for this suite.
STEP: Destroying namespace "webhook-2995-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.353 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":119,"skipped":1835,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:48:59.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 21 00:49:01.930: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 21 00:49:03.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567741, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567741, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567742, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733567741, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 00:49:07.001: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:49:07.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:49:08.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-491" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:8.683 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":120,"skipped":1861,"failed":0}
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:49:08.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-78660dbe-09d4-43bd-acb9-1b95cbb82850
STEP: Creating a pod to test consume secrets
Aug 21 00:49:08.552: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b" in namespace "projected-4458" to be "success or failure"
Aug 21 00:49:08.557: INFO: Pod "pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.532205ms
Aug 21 00:49:10.563: INFO: Pod "pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010953535s
Aug 21 00:49:12.570: INFO: Pod "pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017601071s
STEP: Saw pod success
Aug 21 00:49:12.570: INFO: Pod "pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b" satisfied condition "success or failure"
Aug 21 00:49:12.574: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b container projected-secret-volume-test:
STEP: delete the pod
Aug 21 00:49:12.644: INFO: Waiting for pod pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b to disappear
Aug 21 00:49:12.649: INFO: Pod pod-projected-secrets-e388eacb-2ac7-435e-9f95-68096f9f1c3b no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:49:12.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4458" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1861,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:49:12.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-61f8dd72-f743-4b7f-b1ff-86e84575e0be in namespace container-probe-9809
Aug 21 00:49:16.806: INFO: Started pod busybox-61f8dd72-f743-4b7f-b1ff-86e84575e0be in namespace container-probe-9809
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 00:49:16.811: INFO: Initial restart count of pod busybox-61f8dd72-f743-4b7f-b1ff-86e84575e0be is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:53:18.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9809" for this suite.
• [SLOW TEST:245.774 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1900,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:53:18.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-2069
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2069 to expose endpoints map[]
Aug 21 00:53:18.651: INFO: successfully validated that service multi-endpoint-test in namespace services-2069 exposes endpoints map[] (30.641468ms elapsed)
STEP: Creating pod pod1 in namespace services-2069
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2069 to expose endpoints map[pod1:[100]]
Aug 21 00:53:22.845: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.145809771s elapsed, will retry)
Aug 21 00:53:23.870: INFO: successfully validated that service multi-endpoint-test in namespace services-2069 exposes endpoints map[pod1:[100]] (5.170537157s elapsed)
STEP: Creating pod pod2 in namespace services-2069
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2069 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 21 00:53:28.037: INFO: successfully validated that service multi-endpoint-test in namespace services-2069 exposes endpoints map[pod1:[100] pod2:[101]] (4.160427473s elapsed)
STEP: Deleting pod pod1 in namespace services-2069
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2069 to expose endpoints map[pod2:[101]]
Aug 21 00:53:28.070: INFO: successfully validated that service multi-endpoint-test in namespace services-2069 exposes endpoints map[pod2:[101]] (24.36047ms elapsed)
STEP: Deleting pod pod2 in namespace services-2069
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2069 to expose endpoints map[]
Aug 21 00:53:28.181: INFO: successfully validated that service multi-endpoint-test in namespace services-2069 exposes endpoints map[] (106.599555ms elapsed)
[AfterEach] [sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:53:28.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2069" for this suite.
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.957 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":123,"skipped":1906,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:53:28.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b1820fff-2704-4979-a0ad-7043419f17ae STEP: Creating secret with name s-test-opt-upd-70949691-6d09-4806-baf6-788dd2c961f5 STEP: Creating the pod STEP: Deleting secret 
s-test-opt-del-b1820fff-2704-4979-a0ad-7043419f17ae STEP: Updating secret s-test-opt-upd-70949691-6d09-4806-baf6-788dd2c961f5 STEP: Creating secret with name s-test-opt-create-b3fd6311-120c-4b72-aca4-daeb3244e74b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:55:04.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9895" for this suite. • [SLOW TEST:95.663 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1963,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:55:04.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with 
defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0df27ea0-38c3-4dc8-bf4f-43f2fa9be572 STEP: Creating a pod to test consume secrets Aug 21 00:55:04.174: INFO: Waiting up to 5m0s for pod "pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8" in namespace "secrets-8189" to be "success or failure" Aug 21 00:55:04.207: INFO: Pod "pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.771029ms Aug 21 00:55:06.214: INFO: Pod "pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040072543s Aug 21 00:55:08.222: INFO: Pod "pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8": Phase="Running", Reason="", readiness=true. Elapsed: 4.048262723s Aug 21 00:55:10.339: INFO: Pod "pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.16532877s STEP: Saw pod success Aug 21 00:55:10.340: INFO: Pod "pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8" satisfied condition "success or failure" Aug 21 00:55:10.345: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8 container secret-volume-test: STEP: delete the pod Aug 21 00:55:10.688: INFO: Waiting for pod pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8 to disappear Aug 21 00:55:10.715: INFO: Pod pod-secrets-a8624aed-1818-422d-baa1-15f7216f3cf8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:55:10.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8189" for this suite. 
• [SLOW TEST:7.297 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1967,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:55:11.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 21 00:55:12.341: INFO: Waiting up to 5m0s for pod "pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9" in namespace "emptydir-4915" to be "success or failure" Aug 21 00:55:12.362: INFO: Pod "pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9": Phase="Pending", Reason="", 
readiness=false. Elapsed: 20.8372ms Aug 21 00:55:14.374: INFO: Pod "pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032796977s Aug 21 00:55:16.379: INFO: Pod "pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.037553367s Aug 21 00:55:18.387: INFO: Pod "pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045026636s STEP: Saw pod success Aug 21 00:55:18.387: INFO: Pod "pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9" satisfied condition "success or failure" Aug 21 00:55:18.392: INFO: Trying to get logs from node jerma-worker pod pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9 container test-container: STEP: delete the pod Aug 21 00:55:18.418: INFO: Waiting for pod pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9 to disappear Aug 21 00:55:18.422: INFO: Pod pod-2d88d37a-b7a7-4e0f-b473-b22f9a8106c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:55:18.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4915" for this suite. 
• [SLOW TEST:7.066 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2018,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:55:18.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-3dca591e-1d20-48ad-9b49-a6febd62b87d STEP: Creating secret with name s-test-opt-upd-651432e7-1047-4703-87a9-122c43c612bf STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3dca591e-1d20-48ad-9b49-a6febd62b87d STEP: Updating secret s-test-opt-upd-651432e7-1047-4703-87a9-122c43c612bf STEP: Creating secret with name 
s-test-opt-create-d9209a5a-b5e6-44f0-9ef3-31062823cc84 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:56:47.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6619" for this suite. • [SLOW TEST:88.973 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2040,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:56:47.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for 
services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 00:56:51.916: INFO: Waiting up to 5m0s for pod "client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b" in namespace "pods-5664" to be "success or failure" Aug 21 00:56:51.962: INFO: Pod "client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 45.537572ms Aug 21 00:56:53.969: INFO: Pod "client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052749937s Aug 21 00:56:56.094: INFO: Pod "client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177927119s Aug 21 00:56:58.101: INFO: Pod "client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.184727341s STEP: Saw pod success Aug 21 00:56:58.101: INFO: Pod "client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b" satisfied condition "success or failure" Aug 21 00:56:58.106: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b container env3cont: STEP: delete the pod Aug 21 00:56:58.163: INFO: Waiting for pod client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b to disappear Aug 21 00:56:58.195: INFO: Pod client-envvars-3ba14b48-1b67-4064-9cb5-1814d0349b9b no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:56:58.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5664" for this suite. 
• [SLOW TEST:10.797 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2056,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:56:58.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Aug 21 00:56:58.281: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:58:35.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7657" for this suite. • [SLOW TEST:97.361 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":129,"skipped":2059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:58:35.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Aug 21 00:58:35.745: INFO: Created pod &Pod{ObjectMeta:{dns-6088 dns-6088 /api/v1/namespaces/dns-6088/pods/dns-6088 2aa4a776-62c6-4bc8-adff-fba37bfb05ba 1987418 0 2020-08-21 00:58:35 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pqbr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pqbr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pqbr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,Securit
yContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Aug 21 00:58:39.763: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6088 PodName:dns-6088 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:58:39.763: INFO: >>> kubeConfig: /root/.kube/config I0821 00:58:39.832186 7 log.go:172] (0x4002b0a2c0) (0x40007fc8c0) Create stream I0821 00:58:39.832348 7 log.go:172] (0x4002b0a2c0) (0x40007fc8c0) Stream added, broadcasting: 1 I0821 00:58:39.836525 7 log.go:172] (0x4002b0a2c0) Reply frame received for 1 I0821 00:58:39.836716 7 log.go:172] (0x4002b0a2c0) (0x40007fd360) Create stream I0821 00:58:39.836864 7 log.go:172] (0x4002b0a2c0) (0x40007fd360) Stream added, broadcasting: 3 I0821 00:58:39.838112 7 log.go:172] (0x4002b0a2c0) Reply frame received for 3 I0821 00:58:39.838256 7 log.go:172] (0x4002b0a2c0) (0x4001d35040) Create stream I0821 00:58:39.838326 7 log.go:172] (0x4002b0a2c0) (0x4001d35040) Stream added, broadcasting: 5 I0821 00:58:39.839384 7 log.go:172] (0x4002b0a2c0) Reply frame received for 5 I0821 00:58:39.930204 7 log.go:172] (0x4002b0a2c0) Data frame received for 3 I0821 00:58:39.930369 7 log.go:172] (0x40007fd360) (3) Data frame handling I0821 00:58:39.930503 7 log.go:172] (0x40007fd360) (3) Data frame sent I0821 00:58:39.932695 7 log.go:172] (0x4002b0a2c0) Data frame received for 5 I0821 00:58:39.932964 7 log.go:172] (0x4001d35040) (5) Data frame handling I0821 00:58:39.933164 7 log.go:172] (0x4002b0a2c0) Data frame received for 3 I0821 00:58:39.933325 7 log.go:172] (0x40007fd360) (3) Data frame handling I0821 00:58:39.934399 7 log.go:172] (0x4002b0a2c0) Data frame received for 1 I0821 00:58:39.934477 7 log.go:172] (0x40007fc8c0) (1) Data frame handling I0821 00:58:39.934568 7 log.go:172] (0x40007fc8c0) (1) Data frame sent I0821 00:58:39.934679 7 log.go:172] (0x4002b0a2c0) (0x40007fc8c0) Stream removed, broadcasting: 1 I0821 00:58:39.934795 7 log.go:172] (0x4002b0a2c0) Go away received I0821 00:58:39.935032 7 log.go:172] (0x4002b0a2c0) 
(0x40007fc8c0) Stream removed, broadcasting: 1 I0821 00:58:39.935173 7 log.go:172] (0x4002b0a2c0) (0x40007fd360) Stream removed, broadcasting: 3 I0821 00:58:39.935322 7 log.go:172] (0x4002b0a2c0) (0x4001d35040) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Aug 21 00:58:39.935: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6088 PodName:dns-6088 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 00:58:39.936: INFO: >>> kubeConfig: /root/.kube/config I0821 00:58:39.993039 7 log.go:172] (0x4002b0a8f0) (0x4000602820) Create stream I0821 00:58:39.993254 7 log.go:172] (0x4002b0a8f0) (0x4000602820) Stream added, broadcasting: 1 I0821 00:58:39.997684 7 log.go:172] (0x4002b0a8f0) Reply frame received for 1 I0821 00:58:39.997899 7 log.go:172] (0x4002b0a8f0) (0x4000602960) Create stream I0821 00:58:39.997997 7 log.go:172] (0x4002b0a8f0) (0x4000602960) Stream added, broadcasting: 3 I0821 00:58:39.999367 7 log.go:172] (0x4002b0a8f0) Reply frame received for 3 I0821 00:58:39.999495 7 log.go:172] (0x4002b0a8f0) (0x4000b58140) Create stream I0821 00:58:39.999568 7 log.go:172] (0x4002b0a8f0) (0x4000b58140) Stream added, broadcasting: 5 I0821 00:58:40.000849 7 log.go:172] (0x4002b0a8f0) Reply frame received for 5 I0821 00:58:40.082658 7 log.go:172] (0x4002b0a8f0) Data frame received for 3 I0821 00:58:40.082895 7 log.go:172] (0x4000602960) (3) Data frame handling I0821 00:58:40.083049 7 log.go:172] (0x4000602960) (3) Data frame sent I0821 00:58:40.084998 7 log.go:172] (0x4002b0a8f0) Data frame received for 3 I0821 00:58:40.085177 7 log.go:172] (0x4000602960) (3) Data frame handling I0821 00:58:40.085326 7 log.go:172] (0x4002b0a8f0) Data frame received for 5 I0821 00:58:40.085494 7 log.go:172] (0x4000b58140) (5) Data frame handling I0821 00:58:40.086510 7 log.go:172] (0x4002b0a8f0) Data frame received for 1 I0821 00:58:40.086661 7 log.go:172] (0x4000602820) (1) 
Data frame handling I0821 00:58:40.086784 7 log.go:172] (0x4000602820) (1) Data frame sent I0821 00:58:40.086902 7 log.go:172] (0x4002b0a8f0) (0x4000602820) Stream removed, broadcasting: 1 I0821 00:58:40.087093 7 log.go:172] (0x4002b0a8f0) Go away received I0821 00:58:40.087413 7 log.go:172] (0x4002b0a8f0) (0x4000602820) Stream removed, broadcasting: 1 I0821 00:58:40.087589 7 log.go:172] (0x4002b0a8f0) (0x4000602960) Stream removed, broadcasting: 3 I0821 00:58:40.087763 7 log.go:172] (0x4002b0a8f0) (0x4000b58140) Stream removed, broadcasting: 5 Aug 21 00:58:40.088: INFO: Deleting pod dns-6088... [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 00:58:40.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6088" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":130,"skipped":2093,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 00:58:40.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery 
documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:58:40.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1361" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":131,"skipped":2109,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:58:40.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 21 00:58:40.572: INFO: Waiting up to 5m0s for pod "client-containers-b3aecd2b-b050-465c-abbb-d739a4130398" in namespace "containers-982" to be "success or failure"
Aug 21 00:58:40.586: INFO: Pod "client-containers-b3aecd2b-b050-465c-abbb-d739a4130398": Phase="Pending", Reason="", readiness=false. Elapsed: 13.715419ms
Aug 21 00:58:42.599: INFO: Pod "client-containers-b3aecd2b-b050-465c-abbb-d739a4130398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026688758s
Aug 21 00:58:44.604: INFO: Pod "client-containers-b3aecd2b-b050-465c-abbb-d739a4130398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03187355s
STEP: Saw pod success
Aug 21 00:58:44.604: INFO: Pod "client-containers-b3aecd2b-b050-465c-abbb-d739a4130398" satisfied condition "success or failure"
Aug 21 00:58:44.607: INFO: Trying to get logs from node jerma-worker2 pod client-containers-b3aecd2b-b050-465c-abbb-d739a4130398 container test-container: 
STEP: delete the pod
Aug 21 00:58:44.680: INFO: Waiting for pod client-containers-b3aecd2b-b050-465c-abbb-d739a4130398 to disappear
Aug 21 00:58:44.683: INFO: Pod client-containers-b3aecd2b-b050-465c-abbb-d739a4130398 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:58:44.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-982" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:58:44.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-9a8885dd-9311-4a5f-9841-91df4ae6cc8b
STEP: Creating a pod to test consume configMaps
Aug 21 00:58:44.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42" in namespace "configmap-2548" to be "success or failure"
Aug 21 00:58:44.772: INFO: Pod "pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.713456ms
Aug 21 00:58:46.778: INFO: Pod "pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013146807s
Aug 21 00:58:48.783: INFO: Pod "pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018433922s
STEP: Saw pod success
Aug 21 00:58:48.784: INFO: Pod "pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42" satisfied condition "success or failure"
Aug 21 00:58:48.787: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42 container configmap-volume-test: 
STEP: delete the pod
Aug 21 00:58:48.880: INFO: Waiting for pod pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42 to disappear
Aug 21 00:58:48.888: INFO: Pod pod-configmaps-b7d73550-c292-4726-b77b-80832a2d2d42 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:58:48.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2548" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:58:48.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:58:49.018: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:58:54.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3817" for this suite.

• [SLOW TEST:5.185 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":135,"skipped":2187,"failed":0}
SSS
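For context, the adoption scenario recorded above can be reproduced with a pair of manifests along these lines. This is an illustrative sketch: only the `name: pod-adoption` label is taken from the STEP lines; the image and replica count are assumptions.

```yaml
# A bare pod carrying the 'name' label; created first, it has no owner.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: httpd:2.4.38-alpine       # assumed image
---
# A replication controller whose selector matches that label; on creation
# it adopts the existing orphan pod instead of starting a fresh replica.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: httpd:2.4.38-alpine
```

Adoption can be confirmed by inspecting the pod's `metadata.ownerReferences` after the controller is created.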
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:58:54.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:58:54.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305" in namespace "projected-17" to be "success or failure"
Aug 21 00:58:54.392: INFO: Pod "downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305": Phase="Pending", Reason="", readiness=false. Elapsed: 33.365769ms
Aug 21 00:58:56.398: INFO: Pod "downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039928662s
Aug 21 00:58:58.430: INFO: Pod "downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072270008s
STEP: Saw pod success
Aug 21 00:58:58.431: INFO: Pod "downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305" satisfied condition "success or failure"
Aug 21 00:58:58.435: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305 container client-container: 
STEP: delete the pod
Aug 21 00:58:58.456: INFO: Waiting for pod downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305 to disappear
Aug 21 00:58:58.466: INFO: Pod downwardapi-volume-84454fa5-dd9d-442b-b5ca-5669f0449305 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:58:58.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-17" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2190,"failed":0}
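The volume shape this test exercises looks roughly like the following pod spec. This is a sketch: the file path, image, and `0400` mode are illustrative assumptions, not values taken from the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # the DefaultMode applied to the projected files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```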

------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:58:58.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0821 00:59:08.667176       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 00:59:08.667: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:08.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5958" for this suite.

• [SLOW TEST:10.201 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":137,"skipped":2190,"failed":0}
SSSSSSSSSSSSSSS
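The "not orphaning" case above corresponds to deleting the controller with a cascading deletion propagation policy, roughly the DeleteOptions fragment below (illustrative; the test drives this through the Go client, not a manifest):

```yaml
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background   # dependents (the rc's pods) are garbage collected;
                                # "Orphan" would instead leave them running
```

With kubectl, the `--cascade` flag makes the same choice when deleting a controller.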
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:08.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:59:08.833: INFO: Create a RollingUpdate DaemonSet
Aug 21 00:59:08.839: INFO: Check that daemon pods launch on every node of the cluster
Aug 21 00:59:08.856: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:08.870: INFO: Number of nodes with available pods: 0
Aug 21 00:59:08.870: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:59:09.881: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:09.887: INFO: Number of nodes with available pods: 0
Aug 21 00:59:09.888: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:59:11.000: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:11.006: INFO: Number of nodes with available pods: 0
Aug 21 00:59:11.006: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:59:11.963: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:12.072: INFO: Number of nodes with available pods: 0
Aug 21 00:59:12.072: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:59:12.920: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:12.927: INFO: Number of nodes with available pods: 1
Aug 21 00:59:12.927: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 00:59:13.903: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:13.909: INFO: Number of nodes with available pods: 2
Aug 21 00:59:13.909: INFO: Number of running nodes: 2, number of available pods: 2
Aug 21 00:59:13.909: INFO: Update the DaemonSet to trigger a rollout
Aug 21 00:59:13.920: INFO: Updating DaemonSet daemon-set
Aug 21 00:59:21.986: INFO: Roll back the DaemonSet before rollout is complete
Aug 21 00:59:21.995: INFO: Updating DaemonSet daemon-set
Aug 21 00:59:21.995: INFO: Make sure DaemonSet rollback is complete
Aug 21 00:59:22.001: INFO: Wrong image for pod: daemon-set-pd4f5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 00:59:22.001: INFO: Pod daemon-set-pd4f5 is not available
Aug 21 00:59:22.010: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:23.047: INFO: Wrong image for pod: daemon-set-pd4f5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 00:59:23.047: INFO: Pod daemon-set-pd4f5 is not available
Aug 21 00:59:23.054: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:24.031: INFO: Wrong image for pod: daemon-set-pd4f5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 00:59:24.032: INFO: Pod daemon-set-pd4f5 is not available
Aug 21 00:59:24.094: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 00:59:25.017: INFO: Pod daemon-set-tnvhk is not available
Aug 21 00:59:25.024: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4858, will wait for the garbage collector to delete the pods
Aug 21 00:59:25.095: INFO: Deleting DaemonSet.extensions daemon-set took: 7.368714ms
Aug 21 00:59:25.396: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.698534ms
Aug 21 00:59:28.201: INFO: Number of nodes with available pods: 0
Aug 21 00:59:28.201: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 00:59:28.205: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4858/daemonsets","resourceVersion":"1987839"},"items":null}

Aug 21 00:59:28.209: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4858/pods","resourceVersion":"1987839"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:28.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4858" for this suite.

• [SLOW TEST:19.581 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":138,"skipped":2205,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
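The DaemonSet under test uses the RollingUpdate strategy; a minimal equivalent manifest might look like this. The image names appear in the log above, while the label key and container name are assumptions.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set              # assumed label key
  updateStrategy:
    type: RollingUpdate            # required for the rollout/rollback behaviour tested here
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                  # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine   # updated to foo:non-existent, then rolled back
```

A mid-rollout rollback like the one logged here can be triggered with `kubectl rollout undo daemonset/daemon-set`.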
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:28.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 21 00:59:28.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 21 00:59:29.647: INFO: stderr: ""
Aug 21 00:59:29.647: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:29.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6517" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":139,"skipped":2232,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:29.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-7a0468a6-e9cb-44c4-853f-0a0ccaa7fd7c
STEP: Creating a pod to test consume secrets
Aug 21 00:59:29.733: INFO: Waiting up to 5m0s for pod "pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf" in namespace "secrets-2360" to be "success or failure"
Aug 21 00:59:29.814: INFO: Pod "pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 81.087531ms
Aug 21 00:59:31.844: INFO: Pod "pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111246109s
Aug 21 00:59:33.860: INFO: Pod "pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf": Phase="Running", Reason="", readiness=true. Elapsed: 4.127251738s
Aug 21 00:59:35.869: INFO: Pod "pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135585154s
STEP: Saw pod success
Aug 21 00:59:35.869: INFO: Pod "pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf" satisfied condition "success or failure"
Aug 21 00:59:35.875: INFO: Trying to get logs from node jerma-worker pod pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf container secret-volume-test: 
STEP: delete the pod
Aug 21 00:59:35.896: INFO: Waiting for pod pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf to disappear
Aug 21 00:59:35.900: INFO: Pod pod-secrets-5391861e-a9da-4833-bb7e-c60ac497b6cf no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:35.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2360" for this suite.

• [SLOW TEST:6.274 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2236,"failed":0}
SSSSSSSSSSSS
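The pod shape behind this test combines a secret volume's `defaultMode` with a pod-level `fsGroup` and a non-root user, roughly as follows (illustrative values throughout; the real secret name appears in the STEP lines above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root, per the test title
    fsGroup: 1000                  # group ownership applied to the volume
  containers:
  - name: secret-volume-test
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # illustrative secret name
      defaultMode: 0440                 # assumed mode
```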
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:35.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:40.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2170" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2248,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:40.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:40.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-187" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":142,"skipped":2257,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
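The create/get/update/delete sequence above operates on an object no more complicated than the following (the hard limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                 # illustrative name
spec:
  hard:
    pods: "5"                      # "Updating" means changing these limits in place
    services: "3"
```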
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:40.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 00:59:44.497: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:44.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6327" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
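A pod matching this scenario sets a non-default `terminationMessagePath` and runs as a non-root user; the `DONE` message seen in the log would come from a spec like the following (a sketch; the image, uid, and path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root user, per the test title
  containers:
  - name: main
    image: busybox                    # assumed image
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path read back as the termination message
```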
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:44.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-49563154-6af7-4ba6-88dc-ab9c36216729
STEP: Creating a pod to test consume configMaps
Aug 21 00:59:44.661: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1" in namespace "configmap-1889" to be "success or failure"
Aug 21 00:59:44.681: INFO: Pod "pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.908601ms
Aug 21 00:59:46.688: INFO: Pod "pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027039403s
Aug 21 00:59:48.695: INFO: Pod "pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034100662s
STEP: Saw pod success
Aug 21 00:59:48.695: INFO: Pod "pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1" satisfied condition "success or failure"
Aug 21 00:59:48.700: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1 container configmap-volume-test: 
STEP: delete the pod
Aug 21 00:59:48.734: INFO: Waiting for pod pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1 to disappear
Aug 21 00:59:48.771: INFO: Pod pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:48.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1889" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2304,"failed":0}
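Editor's note: the ConfigMap and pod names above come from the log, but the key/path mappings do not. A sketch of the "volume with mappings" pattern this test covers (keys, paths, and image are assumptions):

```python
# Sketch of a ConfigMap projected into a volume with an explicit
# key-to-path mapping, consumed by a non-root pod. The ConfigMap and
# pod names mirror the log; keys, paths, and image are assumptions.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-volume-map-49563154-6af7-4ba6-88dc-ab9c36216729"},
    "data": {"data-1": "value-1"},
}
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-d9643f86-fcc1-4eba-8021-44e9907279b1"},
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1000},  # the non-root variant
        "volumes": [{
            "name": "configmap-volume",
            "configMap": {
                "name": configmap["metadata"]["name"],
                # The "mappings": each key is projected to a chosen path
                # instead of the default file named after the key.
                "items": [{"key": "data-1", "path": "path/to/data-1"}],
            },
        }],
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/configmap-volume/path/to/data-1"],
            "volumeMounts": [{"name": "configmap-volume",
                              "mountPath": "/etc/configmap-volume"}],
        }],
    },
}
```

The "success or failure" wait in the log corresponds to this pod running to completion after printing the mapped file's contents.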
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:48.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:59:48.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 21 00:59:50.072: INFO: stderr: ""
Aug 21 00:59:50.073: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:50.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-688" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":145,"skipped":2322,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:50.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 00:59:53.303: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:59:53.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7251" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2325,"failed":0}
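Editor's note: with `TerminationMessagePolicy: FallbackToLogsOnError`, container logs become the termination message only when the container fails *and* the message file is empty; on a successful exit the message stays empty, which is what the bare `Expected: &{}` assertion above reflects. A sketch (image and names are assumptions):

```python
# Sketch of FallbackToLogsOnError on a succeeding pod: the message is
# expected to remain empty even though the container logged output.
# Image and names are illustrative assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "termination-message-fallback-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "main",
            "image": "busybox",
            "command": ["/bin/sh", "-c", "echo OUTPUT; exit 0"],
            # Logs are used as the termination message only on a
            # non-zero exit with an empty termination-message file.
            "terminationMessagePolicy": "FallbackToLogsOnError",
        }],
    },
}
```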
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:59:53.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 21 01:00:01.845: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 01:00:01.852: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 21 01:00:03.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 01:00:03.859: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 21 01:00:05.853: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 01:00:05.859: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:00:05.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8919" for this suite.

• [SLOW TEST:12.426 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2337,"failed":0}
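Editor's note: the "still exists" polling above is the pod's deletion blocking on its preStop hook. A sketch of the `pod-with-prestop-exec-hook` shape (the hook command and handler endpoint are assumptions; the real suite reports back to the HTTPGet-handler pod created in BeforeEach):

```python
# Sketch of a preStop exec hook. The hook runs inside the container
# before SIGTERM is delivered, and deletion blocks until it returns or
# the grace period expires -- hence the repeated "still exists" lines.
# The hook command and handler URL are illustrative assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [{
            "name": "main",
            "image": "busybox",
            "command": ["sleep", "600"],
            "lifecycle": {
                "preStop": {"exec": {"command": [
                    "/bin/sh", "-c",
                    "wget -qO- http://handler-pod:8080/echo?msg=prestop",
                ]}},
            },
        }],
    },
}
```

The final "check prestop hook" step then verifies the handler pod actually received the request.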
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:00:05.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-400
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-400 to expose endpoints map[]
Aug 21 01:00:06.002: INFO: successfully validated that service endpoint-test2 in namespace services-400 exposes endpoints map[] (8.84282ms elapsed)
STEP: Creating pod pod1 in namespace services-400
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-400 to expose endpoints map[pod1:[80]]
Aug 21 01:00:09.070: INFO: successfully validated that service endpoint-test2 in namespace services-400 exposes endpoints map[pod1:[80]] (3.057690602s elapsed)
STEP: Creating pod pod2 in namespace services-400
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-400 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 21 01:00:12.423: INFO: successfully validated that service endpoint-test2 in namespace services-400 exposes endpoints map[pod1:[80] pod2:[80]] (3.345668825s elapsed)
STEP: Deleting pod pod1 in namespace services-400
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-400 to expose endpoints map[pod2:[80]]
Aug 21 01:00:12.466: INFO: successfully validated that service endpoint-test2 in namespace services-400 exposes endpoints map[pod2:[80]] (35.708245ms elapsed)
STEP: Deleting pod pod2 in namespace services-400
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-400 to expose endpoints map[]
Aug 21 01:00:12.489: INFO: successfully validated that service endpoint-test2 in namespace services-400 exposes endpoints map[] (17.525575ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:00:12.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-400" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:6.999 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":148,"skipped":2391,"failed":0}
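Editor's note: the `map[pod1:[80] pod2:[80]]` outputs above are the service's Endpoints tracking exactly the ready pods matching its selector. A sketch of `endpoint-test2` and its backing pods (the selector label and image are assumptions; the names and port come from the log):

```python
# Sketch of service endpoint-test2 and its backing pods pod1/pod2.
# The selector label and image are assumptions; names, namespace, and
# port 80 match the log above.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "endpoint-test2", "namespace": "services-400"},
    "spec": {
        "selector": {"name": "endpoint-test2"},  # assumed label
        "ports": [{"port": 80, "targetPort": 80}],
    },
}

def make_pod(name):
    # Each pod carries the service's selector labels, so it is added to
    # the Endpoints when ready and removed again when deleted.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name,
                     "labels": dict(service["spec"]["selector"])},
        "spec": {"containers": [{"name": name, "image": "nginx",
                                 "ports": [{"containerPort": 80}]}]},
    }

pods = [make_pod("pod1"), make_pod("pod2")]
```

Creating, then deleting, each pod drives the endpoints map through exactly the sequence the log validates: `map[]` → `map[pod1:[80]]` → `map[pod1:[80] pod2:[80]]` → `map[pod2:[80]]` → `map[]`.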
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:00:12.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8257.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8257.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8257.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8257.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8257.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8257.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 01:00:21.342: INFO: DNS probes using dns-8257/dns-test-ac08d254-da7a-4e58-8b72-d0085a315a0d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:00:21.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8257" for this suite.

• [SLOW TEST:8.534 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":149,"skipped":2430,"failed":0}
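Editor's note: the awk pipeline in the probe scripts above (`'{print $$1"-"$$2"-"$$3"-"$$4".dns-8257.pod.cluster.local"}'`) derives the pod's DNS A record from its IP. The same transformation, as a small helper (the sample IP is just an example):

```python
# The probe scripts derive a pod A record by replacing the dots in the
# pod IP with dashes and appending the pod DNS suffix for the namespace.
def pod_a_record(pod_ip: str, namespace: str) -> str:
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.5", "dns-8257"))
# -> 10-244-1-5.dns-8257.pod.cluster.local  (IP here is an example)
```

The wheezy/jessie probes then `dig` this name over UDP and TCP and write `OK` markers that the "looking for the results" step collects.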
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:00:21.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 21 01:00:21.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:02:17.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1278" for this suite.

• [SLOW TEST:115.774 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":150,"skipped":2434,"failed":0}
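Editor's note: the log records only the steps (rename a version, check the new name is served, the old removed, the other unchanged), not the CRD itself. A minimal model of that flow (the version names are assumptions):

```python
# Model of renaming one version of a multi-version CRD. Version names
# here are assumptions; the log does not show them.
crd_versions = [
    {"name": "v2", "served": True, "storage": True},
    {"name": "v3", "served": True, "storage": False},
]

def rename_version(versions, old, new):
    # Replace the matching version's name, leaving the others untouched,
    # which is what the published OpenAPI spec then reflects.
    return [dict(v, name=new) if v["name"] == old else dict(v)
            for v in versions]

renamed = rename_version(crd_versions, "v3", "v4")
served = {v["name"] for v in renamed if v["served"]}
```

After the rename, the published spec serves the new name, drops the old one, and the remaining version is byte-for-byte unchanged — the three checks in the log.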
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:02:17.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 21 01:02:17.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5618'
Aug 21 01:02:21.488: INFO: stderr: ""
Aug 21 01:02:21.488: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 01:02:21.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5618'
Aug 21 01:02:22.756: INFO: stderr: ""
Aug 21 01:02:22.756: INFO: stdout: "update-demo-nautilus-7jr4t update-demo-nautilus-jvbfg "
Aug 21 01:02:22.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jr4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:24.088: INFO: stderr: ""
Aug 21 01:02:24.089: INFO: stdout: ""
Aug 21 01:02:24.089: INFO: update-demo-nautilus-7jr4t is created but not running
Aug 21 01:02:29.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5618'
Aug 21 01:02:30.413: INFO: stderr: ""
Aug 21 01:02:30.413: INFO: stdout: "update-demo-nautilus-7jr4t update-demo-nautilus-jvbfg "
Aug 21 01:02:30.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jr4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:31.653: INFO: stderr: ""
Aug 21 01:02:31.654: INFO: stdout: "true"
Aug 21 01:02:31.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jr4t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:32.880: INFO: stderr: ""
Aug 21 01:02:32.880: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 01:02:32.880: INFO: validating pod update-demo-nautilus-7jr4t
Aug 21 01:02:32.887: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 01:02:32.887: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 01:02:32.887: INFO: update-demo-nautilus-7jr4t is verified up and running
Aug 21 01:02:32.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:34.143: INFO: stderr: ""
Aug 21 01:02:34.144: INFO: stdout: "true"
Aug 21 01:02:34.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:35.406: INFO: stderr: ""
Aug 21 01:02:35.406: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 01:02:35.406: INFO: validating pod update-demo-nautilus-jvbfg
Aug 21 01:02:35.411: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 01:02:35.412: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 01:02:35.412: INFO: update-demo-nautilus-jvbfg is verified up and running
STEP: scaling down the replication controller
Aug 21 01:02:35.419: INFO: scanned /root for discovery docs: 
Aug 21 01:02:35.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5618'
Aug 21 01:02:36.704: INFO: stderr: ""
Aug 21 01:02:36.705: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 01:02:36.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5618'
Aug 21 01:02:38.091: INFO: stderr: ""
Aug 21 01:02:38.092: INFO: stdout: "update-demo-nautilus-7jr4t update-demo-nautilus-jvbfg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 21 01:02:43.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5618'
Aug 21 01:02:44.348: INFO: stderr: ""
Aug 21 01:02:44.349: INFO: stdout: "update-demo-nautilus-jvbfg "
Aug 21 01:02:44.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:45.656: INFO: stderr: ""
Aug 21 01:02:45.656: INFO: stdout: "true"
Aug 21 01:02:45.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:46.908: INFO: stderr: ""
Aug 21 01:02:46.908: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 01:02:46.908: INFO: validating pod update-demo-nautilus-jvbfg
Aug 21 01:02:46.972: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 01:02:46.972: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 01:02:46.972: INFO: update-demo-nautilus-jvbfg is verified up and running
STEP: scaling up the replication controller
Aug 21 01:02:46.981: INFO: scanned /root for discovery docs: 
Aug 21 01:02:46.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5618'
Aug 21 01:02:48.309: INFO: stderr: ""
Aug 21 01:02:48.309: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 01:02:48.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5618'
Aug 21 01:02:49.663: INFO: stderr: ""
Aug 21 01:02:49.663: INFO: stdout: "update-demo-nautilus-jvbfg update-demo-nautilus-m4qvp "
Aug 21 01:02:49.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:50.939: INFO: stderr: ""
Aug 21 01:02:50.939: INFO: stdout: "true"
Aug 21 01:02:50.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:52.175: INFO: stderr: ""
Aug 21 01:02:52.175: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 01:02:52.175: INFO: validating pod update-demo-nautilus-jvbfg
Aug 21 01:02:52.181: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 01:02:52.181: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 01:02:52.181: INFO: update-demo-nautilus-jvbfg is verified up and running
Aug 21 01:02:52.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4qvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:53.466: INFO: stderr: ""
Aug 21 01:02:53.466: INFO: stdout: "true"
Aug 21 01:02:53.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4qvp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5618'
Aug 21 01:02:54.704: INFO: stderr: ""
Aug 21 01:02:54.704: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 01:02:54.704: INFO: validating pod update-demo-nautilus-m4qvp
Aug 21 01:02:54.710: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 01:02:54.710: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 01:02:54.710: INFO: update-demo-nautilus-m4qvp is verified up and running
STEP: using delete to clean up resources
Aug 21 01:02:54.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5618'
Aug 21 01:02:55.957: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 01:02:55.957: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 21 01:02:55.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5618'
Aug 21 01:02:57.225: INFO: stderr: "No resources found in kubectl-5618 namespace.\n"
Aug 21 01:02:57.225: INFO: stdout: ""
Aug 21 01:02:57.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5618 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 01:02:58.487: INFO: stderr: ""
Aug 21 01:02:58.487: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:02:58.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5618" for this suite.

• [SLOW TEST:41.304 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":151,"skipped":2436,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:02:58.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-36 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-36;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-36 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-36;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-36.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-36.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-36.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-36.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-36.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-36.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-36.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-36.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.170.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.170.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.170.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.170.217_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-36 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-36;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-36 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-36;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-36.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-36.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-36.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-36.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-36.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-36.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-36.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-36.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-36.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-36.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.170.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.170.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.170.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.170.217_tcp@PTR;sleep 1; done

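Each `dig` loop above writes an `OK` marker file under `/results` for every lookup that returns an answer; the framework then retries reading those markers from the probe pod and logs a `Lookups ... failed for: [...]` summary listing whichever expected markers it could not read. A minimal, hypothetical sketch of that bookkeeping (names and layout are assumptions, not the framework's actual code):

```python
def failed_lookups(expected, written):
    """Expected marker names with no corresponding /results file —
    the set reported in the 'Lookups ... failed for' log line."""
    return [name for name in expected if name not in written]

expected = [
    "wheezy_udp@dns-test-service",
    "wheezy_tcp@dns-test-service",
    "jessie_udp@dns-test-service",
]
written = {"jessie_udp@dns-test-service"}  # markers the probe pod has produced so far
print(failed_lookups(expected, written))
```

Because the probe loops re-run every second for up to 600 iterations, the failed set is expected to shrink to empty on a healthy cluster as DNS records propagate.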
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 01:03:04.714: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.718: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.723: INFO: Unable to read wheezy_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.727: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.730: INFO: Unable to read wheezy_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.734: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.739: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.743: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.773: INFO: Unable to read jessie_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.777: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.780: INFO: Unable to read jessie_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.783: INFO: Unable to read jessie_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.787: INFO: Unable to read jessie_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.791: INFO: Unable to read jessie_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.794: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:04.822: INFO: Lookups using dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-36 wheezy_tcp@dns-test-service.dns-36 wheezy_udp@dns-test-service.dns-36.svc wheezy_tcp@dns-test-service.dns-36.svc wheezy_udp@_http._tcp.dns-test-service.dns-36.svc wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-36 jessie_tcp@dns-test-service.dns-36 jessie_udp@dns-test-service.dns-36.svc jessie_tcp@dns-test-service.dns-36.svc jessie_udp@_http._tcp.dns-test-service.dns-36.svc jessie_tcp@_http._tcp.dns-test-service.dns-36.svc]

Aug 21 01:03:09.830: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.835: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.839: INFO: Unable to read wheezy_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.844: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.848: INFO: Unable to read wheezy_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.853: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.857: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.862: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.895: INFO: Unable to read jessie_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.899: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.904: INFO: Unable to read jessie_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.908: INFO: Unable to read jessie_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.912: INFO: Unable to read jessie_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.917: INFO: Unable to read jessie_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.926: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:09.952: INFO: Lookups using dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-36 wheezy_tcp@dns-test-service.dns-36 wheezy_udp@dns-test-service.dns-36.svc wheezy_tcp@dns-test-service.dns-36.svc wheezy_udp@_http._tcp.dns-test-service.dns-36.svc wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-36 jessie_tcp@dns-test-service.dns-36 jessie_udp@dns-test-service.dns-36.svc jessie_tcp@dns-test-service.dns-36.svc jessie_udp@_http._tcp.dns-test-service.dns-36.svc jessie_tcp@_http._tcp.dns-test-service.dns-36.svc]

Aug 21 01:03:14.829: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.835: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.839: INFO: Unable to read wheezy_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.844: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.853: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.857: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.862: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.892: INFO: Unable to read jessie_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.896: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.900: INFO: Unable to read jessie_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.904: INFO: Unable to read jessie_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.909: INFO: Unable to read jessie_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.913: INFO: Unable to read jessie_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.917: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.921: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:14.945: INFO: Lookups using dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-36 wheezy_tcp@dns-test-service.dns-36 wheezy_udp@dns-test-service.dns-36.svc wheezy_tcp@dns-test-service.dns-36.svc wheezy_udp@_http._tcp.dns-test-service.dns-36.svc wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-36 jessie_tcp@dns-test-service.dns-36 jessie_udp@dns-test-service.dns-36.svc jessie_tcp@dns-test-service.dns-36.svc jessie_udp@_http._tcp.dns-test-service.dns-36.svc jessie_tcp@_http._tcp.dns-test-service.dns-36.svc]

Aug 21 01:03:19.828: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.832: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.837: INFO: Unable to read wheezy_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.841: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.844: INFO: Unable to read wheezy_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.847: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.852: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.875: INFO: Unable to read jessie_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.878: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.882: INFO: Unable to read jessie_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.886: INFO: Unable to read jessie_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.889: INFO: Unable to read jessie_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.893: INFO: Unable to read jessie_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.897: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.901: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:19.925: INFO: Lookups using dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-36 wheezy_tcp@dns-test-service.dns-36 wheezy_udp@dns-test-service.dns-36.svc wheezy_tcp@dns-test-service.dns-36.svc wheezy_udp@_http._tcp.dns-test-service.dns-36.svc wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-36 jessie_tcp@dns-test-service.dns-36 jessie_udp@dns-test-service.dns-36.svc jessie_tcp@dns-test-service.dns-36.svc jessie_udp@_http._tcp.dns-test-service.dns-36.svc jessie_tcp@_http._tcp.dns-test-service.dns-36.svc]

Aug 21 01:03:24.829: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.833: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.837: INFO: Unable to read wheezy_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.840: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.844: INFO: Unable to read wheezy_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.852: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.856: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.886: INFO: Unable to read jessie_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.890: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.894: INFO: Unable to read jessie_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.899: INFO: Unable to read jessie_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.903: INFO: Unable to read jessie_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.908: INFO: Unable to read jessie_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.913: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.918: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:24.943: INFO: Lookups using dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-36 wheezy_tcp@dns-test-service.dns-36 wheezy_udp@dns-test-service.dns-36.svc wheezy_tcp@dns-test-service.dns-36.svc wheezy_udp@_http._tcp.dns-test-service.dns-36.svc wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-36 jessie_tcp@dns-test-service.dns-36 jessie_udp@dns-test-service.dns-36.svc jessie_tcp@dns-test-service.dns-36.svc jessie_udp@_http._tcp.dns-test-service.dns-36.svc jessie_tcp@_http._tcp.dns-test-service.dns-36.svc]

Aug 21 01:03:29.829: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.834: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.839: INFO: Unable to read wheezy_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.843: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.847: INFO: Unable to read wheezy_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.851: INFO: Unable to read wheezy_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.856: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.860: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.900: INFO: Unable to read jessie_udp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.929: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.934: INFO: Unable to read jessie_udp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.940: INFO: Unable to read jessie_tcp@dns-test-service.dns-36 from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.944: INFO: Unable to read jessie_udp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.948: INFO: Unable to read jessie_tcp@dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.953: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.958: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-36.svc from pod dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1: the server could not find the requested resource (get pods dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1)
Aug 21 01:03:29.985: INFO: Lookups using dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-36 wheezy_tcp@dns-test-service.dns-36 wheezy_udp@dns-test-service.dns-36.svc wheezy_tcp@dns-test-service.dns-36.svc wheezy_udp@_http._tcp.dns-test-service.dns-36.svc wheezy_tcp@_http._tcp.dns-test-service.dns-36.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-36 jessie_tcp@dns-test-service.dns-36 jessie_udp@dns-test-service.dns-36.svc jessie_tcp@dns-test-service.dns-36.svc jessie_udp@_http._tcp.dns-test-service.dns-36.svc jessie_tcp@_http._tcp.dns-test-service.dns-36.svc]

Aug 21 01:03:34.927: INFO: DNS probes using dns-36/dns-test-0dfeb9d7-f09e-4087-9085-90de5e91b9a1 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:03:35.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-36" for this suite.

• [SLOW TEST:37.221 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":152,"skipped":2438,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:03:35.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:03:35.876: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-29b8e48f-7e96-43fe-a336-584738193bb9" in namespace "security-context-test-6366" to be "success or failure"
Aug 21 01:03:35.886: INFO: Pod "alpine-nnp-false-29b8e48f-7e96-43fe-a336-584738193bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.320193ms
Aug 21 01:03:37.984: INFO: Pod "alpine-nnp-false-29b8e48f-7e96-43fe-a336-584738193bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107290489s
Aug 21 01:03:39.991: INFO: Pod "alpine-nnp-false-29b8e48f-7e96-43fe-a336-584738193bb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.114445566s
Aug 21 01:03:41.998: INFO: Pod "alpine-nnp-false-29b8e48f-7e96-43fe-a336-584738193bb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121478044s
Aug 21 01:03:41.998: INFO: Pod "alpine-nnp-false-29b8e48f-7e96-43fe-a336-584738193bb9" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:03:42.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6366" for this suite.

• [SLOW TEST:6.328 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2466,"failed":0}
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:03:42.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-cr9r7 in namespace proxy-7254
I0821 01:03:42.196175       7 runners.go:189] Created replication controller with name: proxy-service-cr9r7, namespace: proxy-7254, replica count: 1
I0821 01:03:43.247749       7 runners.go:189] proxy-service-cr9r7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 01:03:44.248641       7 runners.go:189] proxy-service-cr9r7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 01:03:45.249769       7 runners.go:189] proxy-service-cr9r7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 01:03:46.250547       7 runners.go:189] proxy-service-cr9r7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 01:03:47.251359       7 runners.go:189] proxy-service-cr9r7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 01:03:48.252173       7 runners.go:189] proxy-service-cr9r7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 01:03:49.252932       7 runners.go:189] proxy-service-cr9r7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 01:03:49.262: INFO: setup took 7.139281535s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 21 01:03:49.270: INFO: (0) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 7.058542ms)
Aug 21 01:03:49.270: INFO: (0) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 7.233142ms)
Aug 21 01:03:49.270: INFO: (0) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 6.735278ms)
Aug 21 01:03:49.274: INFO: (0) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 10.744412ms)
Aug 21 01:03:49.274: INFO: (0) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 10.749178ms)
Aug 21 01:03:49.275: INFO: (0) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 11.13859ms)
Aug 21 01:03:49.275: INFO: (0) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 11.423751ms)
Aug 21 01:03:49.275: INFO: (0) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 11.960476ms)
Aug 21 01:03:49.275: INFO: (0) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 11.662565ms)
Aug 21 01:03:49.275: INFO: (0) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 11.864024ms)
Aug 21 01:03:49.277: INFO: (0) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 13.226274ms)
Aug 21 01:03:49.280: INFO: (0) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: ... (200; 4.584371ms)
Aug 21 01:03:49.285: INFO: (1) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 5.039723ms)
Aug 21 01:03:49.286: INFO: (1) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.219341ms)
Aug 21 01:03:49.286: INFO: (1) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 5.357794ms)
Aug 21 01:03:49.289: INFO: (1) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 9.027416ms)
Aug 21 01:03:49.290: INFO: (1) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 9.453344ms)
Aug 21 01:03:49.290: INFO: (1) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test (200; 10.111871ms)
Aug 21 01:03:49.291: INFO: (1) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 10.04715ms)
Aug 21 01:03:49.291: INFO: (1) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 10.187636ms)
Aug 21 01:03:49.291: INFO: (1) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 10.680583ms)
Aug 21 01:03:49.295: INFO: (2) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 4.208043ms)
Aug 21 01:03:49.296: INFO: (2) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.087601ms)
Aug 21 01:03:49.296: INFO: (2) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 4.969137ms)
Aug 21 01:03:49.297: INFO: (2) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.808928ms)
Aug 21 01:03:49.297: INFO: (2) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 6.020953ms)
Aug 21 01:03:49.297: INFO: (2) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 5.939238ms)
Aug 21 01:03:49.297: INFO: (2) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 6.197674ms)
Aug 21 01:03:49.297: INFO: (2) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 6.056796ms)
Aug 21 01:03:49.298: INFO: (2) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 6.40508ms)
Aug 21 01:03:49.298: INFO: (2) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 6.695724ms)
Aug 21 01:03:49.298: INFO: (2) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 6.801145ms)
Aug 21 01:03:49.298: INFO: (2) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 7.041603ms)
Aug 21 01:03:49.298: INFO: (2) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 6.885891ms)
Aug 21 01:03:49.298: INFO: (2) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test (200; 5.009188ms)
Aug 21 01:03:49.306: INFO: (3) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 5.660332ms)
Aug 21 01:03:49.306: INFO: (3) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 5.622355ms)
Aug 21 01:03:49.306: INFO: (3) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 5.958556ms)
Aug 21 01:03:49.306: INFO: (3) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.936551ms)
Aug 21 01:03:49.306: INFO: (3) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 6.478824ms)
Aug 21 01:03:49.306: INFO: (3) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 6.206734ms)
Aug 21 01:03:49.306: INFO: (3) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 6.612517ms)
Aug 21 01:03:49.307: INFO: (3) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 6.633102ms)
Aug 21 01:03:49.307: INFO: (3) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 6.992499ms)
Aug 21 01:03:49.307: INFO: (3) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 7.128621ms)
Aug 21 01:03:49.307: INFO: (3) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 7.175015ms)
Aug 21 01:03:49.312: INFO: (4) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 4.133846ms)
Aug 21 01:03:49.312: INFO: (4) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 3.940786ms)
Aug 21 01:03:49.312: INFO: (4) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 4.299056ms)
Aug 21 01:03:49.313: INFO: (4) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 5.289829ms)
Aug 21 01:03:49.313: INFO: (4) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 5.496934ms)
Aug 21 01:03:49.313: INFO: (4) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 5.555271ms)
Aug 21 01:03:49.313: INFO: (4) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.63166ms)
Aug 21 01:03:49.313: INFO: (4) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 5.809081ms)
Aug 21 01:03:49.313: INFO: (4) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test<... (200; 3.506098ms)
Aug 21 01:03:49.319: INFO: (5) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 3.782359ms)
Aug 21 01:03:49.322: INFO: (5) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 6.812703ms)
Aug 21 01:03:49.322: INFO: (5) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test (200; 7.600179ms)
Aug 21 01:03:49.323: INFO: (5) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 7.699841ms)
Aug 21 01:03:49.323: INFO: (5) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 7.672994ms)
Aug 21 01:03:49.324: INFO: (5) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 8.233572ms)
Aug 21 01:03:49.324: INFO: (5) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 8.314509ms)
Aug 21 01:03:49.324: INFO: (5) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 8.236203ms)
Aug 21 01:03:49.324: INFO: (5) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 8.303394ms)
Aug 21 01:03:49.324: INFO: (5) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 8.512777ms)
Aug 21 01:03:49.324: INFO: (5) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 8.790869ms)
Aug 21 01:03:49.324: INFO: (5) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 8.817046ms)
Aug 21 01:03:49.329: INFO: (6) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 4.423656ms)
Aug 21 01:03:49.329: INFO: (6) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 4.512746ms)
Aug 21 01:03:49.329: INFO: (6) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 4.714689ms)
Aug 21 01:03:49.329: INFO: (6) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 4.765485ms)
Aug 21 01:03:49.329: INFO: (6) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 5.148451ms)
Aug 21 01:03:49.330: INFO: (6) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: ... (200; 4.963616ms)
Aug 21 01:03:49.346: INFO: (7) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 5.408595ms)
Aug 21 01:03:49.346: INFO: (7) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 5.847697ms)
Aug 21 01:03:49.347: INFO: (7) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test<... (200; 7.023689ms)
Aug 21 01:03:49.348: INFO: (7) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 6.902344ms)
Aug 21 01:03:49.348: INFO: (7) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 7.001554ms)
Aug 21 01:03:49.348: INFO: (7) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 7.156445ms)
Aug 21 01:03:49.348: INFO: (7) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 7.231321ms)
Aug 21 01:03:49.348: INFO: (7) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 7.287504ms)
Aug 21 01:03:49.349: INFO: (7) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 7.960883ms)
Aug 21 01:03:49.354: INFO: (8) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.373427ms)
Aug 21 01:03:49.355: INFO: (8) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 5.633663ms)
Aug 21 01:03:49.355: INFO: (8) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.683999ms)
Aug 21 01:03:49.355: INFO: (8) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.750357ms)
Aug 21 01:03:49.355: INFO: (8) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.912614ms)
Aug 21 01:03:49.355: INFO: (8) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test<... (200; 6.569813ms)
Aug 21 01:03:49.356: INFO: (8) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 6.938954ms)
Aug 21 01:03:49.356: INFO: (8) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 7.423636ms)
Aug 21 01:03:49.357: INFO: (8) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 6.921262ms)
Aug 21 01:03:49.357: INFO: (8) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 7.735279ms)
Aug 21 01:03:49.360: INFO: (9) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 3.017341ms)
Aug 21 01:03:49.360: INFO: (9) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 3.099637ms)
Aug 21 01:03:49.361: INFO: (9) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 3.788323ms)
Aug 21 01:03:49.361: INFO: (9) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 4.317419ms)
Aug 21 01:03:49.362: INFO: (9) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 4.672929ms)
Aug 21 01:03:49.362: INFO: (9) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 4.759691ms)
Aug 21 01:03:49.362: INFO: (9) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 4.903378ms)
Aug 21 01:03:49.362: INFO: (9) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 5.379729ms)
Aug 21 01:03:49.362: INFO: (9) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 5.123111ms)
Aug 21 01:03:49.363: INFO: (9) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: ... (200; 6.066277ms)
Aug 21 01:03:49.363: INFO: (9) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 6.174727ms)
Aug 21 01:03:49.364: INFO: (9) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 6.90972ms)
Aug 21 01:03:49.364: INFO: (9) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 7.047113ms)
Aug 21 01:03:49.364: INFO: (9) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 6.938051ms)
Aug 21 01:03:49.368: INFO: (10) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 3.250006ms)
Aug 21 01:03:49.369: INFO: (10) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 4.823952ms)
Aug 21 01:03:49.369: INFO: (10) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 4.70464ms)
Aug 21 01:03:49.369: INFO: (10) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 4.844912ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 5.227258ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 4.914738ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 5.306981ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 5.078381ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 5.207755ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.659288ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.581826ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.996609ms)
Aug 21 01:03:49.370: INFO: (10) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 5.789565ms)
Aug 21 01:03:49.371: INFO: (10) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 5.848468ms)
Aug 21 01:03:49.371: INFO: (10) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test<... (200; 3.914107ms)
Aug 21 01:03:49.376: INFO: (11) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 4.309528ms)
Aug 21 01:03:49.376: INFO: (11) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 4.750635ms)
Aug 21 01:03:49.376: INFO: (11) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 4.948667ms)
Aug 21 01:03:49.376: INFO: (11) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 4.950824ms)
Aug 21 01:03:49.376: INFO: (11) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 5.131735ms)
Aug 21 01:03:49.376: INFO: (11) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.332766ms)
Aug 21 01:03:49.377: INFO: (11) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 5.329502ms)
Aug 21 01:03:49.377: INFO: (11) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 5.394505ms)
Aug 21 01:03:49.377: INFO: (11) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.452371ms)
Aug 21 01:03:49.377: INFO: (11) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 5.616968ms)
Aug 21 01:03:49.377: INFO: (11) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 5.919606ms)
Aug 21 01:03:49.378: INFO: (11) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 6.248016ms)
Aug 21 01:03:49.378: INFO: (11) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test<... (200; 3.372728ms)
Aug 21 01:03:49.383: INFO: (12) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 3.630915ms)
Aug 21 01:03:49.383: INFO: (12) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test (200; 5.471371ms)
Aug 21 01:03:49.385: INFO: (12) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 5.379688ms)
Aug 21 01:03:49.385: INFO: (12) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 5.38072ms)
Aug 21 01:03:49.385: INFO: (12) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 5.565572ms)
Aug 21 01:03:49.385: INFO: (12) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 5.692428ms)
Aug 21 01:03:49.385: INFO: (12) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.639262ms)
Aug 21 01:03:49.386: INFO: (12) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 5.743729ms)
Aug 21 01:03:49.386: INFO: (12) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 6.017388ms)
Aug 21 01:03:49.390: INFO: (13) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 3.883249ms)
Aug 21 01:03:49.390: INFO: (13) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 4.376419ms)
Aug 21 01:03:49.392: INFO: (13) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 6.456313ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 6.219934ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 6.601695ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 6.57612ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 6.57042ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 6.731898ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 6.997501ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 6.990181ms)
Aug 21 01:03:49.393: INFO: (13) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test<... (200; 3.593251ms)
Aug 21 01:03:49.399: INFO: (14) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 4.091461ms)
Aug 21 01:03:49.399: INFO: (14) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 4.332985ms)
Aug 21 01:03:49.399: INFO: (14) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 4.357891ms)
Aug 21 01:03:49.399: INFO: (14) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 4.57702ms)
Aug 21 01:03:49.399: INFO: (14) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 4.789476ms)
Aug 21 01:03:49.400: INFO: (14) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 5.277246ms)
Aug 21 01:03:49.400: INFO: (14) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 5.089116ms)
Aug 21 01:03:49.400: INFO: (14) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 5.077083ms)
Aug 21 01:03:49.401: INFO: (14) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 6.273089ms)
Aug 21 01:03:49.401: INFO: (14) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 6.095508ms)
Aug 21 01:03:49.401: INFO: (14) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 6.279379ms)
Aug 21 01:03:49.401: INFO: (14) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 6.244703ms)
Aug 21 01:03:49.404: INFO: (14) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 9.01475ms)
Aug 21 01:03:49.407: INFO: (15) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 2.756907ms)
Aug 21 01:03:49.408: INFO: (15) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 3.604719ms)
Aug 21 01:03:49.408: INFO: (15) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 4.089759ms)
Aug 21 01:03:49.409: INFO: (15) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 4.266864ms)
Aug 21 01:03:49.409: INFO: (15) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 4.446343ms)
Aug 21 01:03:49.409: INFO: (15) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 4.513056ms)
Aug 21 01:03:49.409: INFO: (15) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 4.902982ms)
Aug 21 01:03:49.410: INFO: (15) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 5.14357ms)
Aug 21 01:03:49.410: INFO: (15) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.387926ms)
Aug 21 01:03:49.410: INFO: (15) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 5.579865ms)
Aug 21 01:03:49.410: INFO: (15) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 5.219508ms)
Aug 21 01:03:49.410: INFO: (15) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: ... (200; 6.374786ms)
Aug 21 01:03:49.411: INFO: (15) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 6.538232ms)
Aug 21 01:03:49.414: INFO: (16) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 2.59812ms)
Aug 21 01:03:49.416: INFO: (16) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 4.643832ms)
Aug 21 01:03:49.416: INFO: (16) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 4.80932ms)
Aug 21 01:03:49.416: INFO: (16) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test (200; 5.045979ms)
Aug 21 01:03:49.417: INFO: (16) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 5.380244ms)
Aug 21 01:03:49.417: INFO: (16) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 5.39472ms)
Aug 21 01:03:49.417: INFO: (16) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 5.467288ms)
Aug 21 01:03:49.417: INFO: (16) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 5.50329ms)
Aug 21 01:03:49.418: INFO: (16) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 6.456679ms)
Aug 21 01:03:49.418: INFO: (16) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 6.765713ms)
Aug 21 01:03:49.418: INFO: (16) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 6.898495ms)
Aug 21 01:03:49.419: INFO: (16) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 7.054045ms)
Aug 21 01:03:49.419: INFO: (16) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 7.413395ms)
Aug 21 01:03:49.419: INFO: (16) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 7.016541ms)
Aug 21 01:03:49.422: INFO: (17) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: test (200; 5.038482ms)
Aug 21 01:03:49.425: INFO: (17) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 5.148289ms)
Aug 21 01:03:49.425: INFO: (17) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 5.350685ms)
Aug 21 01:03:49.425: INFO: (17) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 5.563442ms)
Aug 21 01:03:49.425: INFO: (17) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.108321ms)
Aug 21 01:03:49.425: INFO: (17) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 6.140998ms)
Aug 21 01:03:49.425: INFO: (17) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 6.395966ms)
Aug 21 01:03:49.426: INFO: (17) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 6.701164ms)
Aug 21 01:03:49.426: INFO: (17) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:1080/proxy/: ... (200; 6.015366ms)
Aug 21 01:03:49.429: INFO: (18) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 3.320554ms)
Aug 21 01:03:49.429: INFO: (18) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 3.378108ms)
Aug 21 01:03:49.431: INFO: (18) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 4.592308ms)
Aug 21 01:03:49.431: INFO: (18) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 5.425525ms)
Aug 21 01:03:49.431: INFO: (18) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 4.892466ms)
Aug 21 01:03:49.431: INFO: (18) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname2/proxy/: bar (200; 5.496571ms)
Aug 21 01:03:49.432: INFO: (18) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 5.610024ms)
Aug 21 01:03:49.432: INFO: (18) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname2/proxy/: tls qux (200; 5.738097ms)
Aug 21 01:03:49.432: INFO: (18) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 5.613175ms)
Aug 21 01:03:49.432: INFO: (18) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: ... (200; 6.914943ms)
Aug 21 01:03:49.434: INFO: (18) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 7.666047ms)
Aug 21 01:03:49.434: INFO: (18) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 7.358122ms)
Aug 21 01:03:49.437: INFO: (19) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:160/proxy/: foo (200; 3.188887ms)
Aug 21 01:03:49.437: INFO: (19) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:462/proxy/: tls qux (200; 3.268694ms)
Aug 21 01:03:49.439: INFO: (19) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:443/proxy/: ... (200; 5.297047ms)
Aug 21 01:03:49.439: INFO: (19) /api/v1/namespaces/proxy-7254/pods/https:proxy-service-cr9r7-bcz6l:460/proxy/: tls baz (200; 5.613134ms)
Aug 21 01:03:49.440: INFO: (19) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname1/proxy/: foo (200; 5.752149ms)
Aug 21 01:03:49.440: INFO: (19) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:1080/proxy/: test<... (200; 5.997428ms)
Aug 21 01:03:49.440: INFO: (19) /api/v1/namespaces/proxy-7254/services/https:proxy-service-cr9r7:tlsportname1/proxy/: tls baz (200; 6.050544ms)
Aug 21 01:03:49.440: INFO: (19) /api/v1/namespaces/proxy-7254/pods/http:proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 6.1617ms)
Aug 21 01:03:49.441: INFO: (19) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l/proxy/: test (200; 6.391293ms)
Aug 21 01:03:49.441: INFO: (19) /api/v1/namespaces/proxy-7254/pods/proxy-service-cr9r7-bcz6l:162/proxy/: bar (200; 6.504256ms)
Aug 21 01:03:49.441: INFO: (19) /api/v1/namespaces/proxy-7254/services/proxy-service-cr9r7:portname2/proxy/: bar (200; 6.602819ms)
Aug 21 01:03:49.441: INFO: (19) /api/v1/namespaces/proxy-7254/services/http:proxy-service-cr9r7:portname1/proxy/: foo (200; 7.132083ms)
STEP: deleting ReplicationController proxy-service-cr9r7 in namespace proxy-7254, will wait for the garbage collector to delete the pods
Aug 21 01:03:49.504: INFO: Deleting ReplicationController proxy-service-cr9r7 took: 9.217476ms
Aug 21 01:03:49.804: INFO: Terminating ReplicationController proxy-service-cr9r7 pods took: 300.787432ms
[AfterEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:04:01.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7254" for this suite.

• [SLOW TEST:19.566 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":154,"skipped":2466,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:04:01.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:04:01.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b" in namespace "downward-api-7936" to be "success or failure"
Aug 21 01:04:01.713: INFO: Pod "downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423921ms
Aug 21 01:04:03.720: INFO: Pod "downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01337562s
Aug 21 01:04:05.726: INFO: Pod "downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019486379s
STEP: Saw pod success
Aug 21 01:04:05.726: INFO: Pod "downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b" satisfied condition "success or failure"
Aug 21 01:04:05.730: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b container client-container: 
STEP: delete the pod
Aug 21 01:04:05.785: INFO: Waiting for pod downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b to disappear
Aug 21 01:04:05.790: INFO: Pod downwardapi-volume-b6d4a98a-bfd7-48c9-b501-d7ce38ca603b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:04:05.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7936" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2482,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:04:05.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e364c551-3b4e-4796-80fe-67db42d51537
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e364c551-3b4e-4796-80fe-67db42d51537
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:05:22.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3268" for this suite.

• [SLOW TEST:76.744 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2497,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:05:22.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:05:29.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8755" for this suite.

• [SLOW TEST:7.139 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":157,"skipped":2515,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:05:29.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 21 01:05:34.390: INFO: Successfully updated pod "annotationupdate3e1c1154-e4a4-4105-85e5-fda535be7322"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:05:36.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8699" for this suite.

• [SLOW TEST:6.744 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2574,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:05:36.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 01:05:41.072: INFO: Successfully updated pod "pod-update-activedeadlineseconds-23bc440e-98b4-4753-a5da-2a14a66cfc2f"
Aug 21 01:05:41.073: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-23bc440e-98b4-4753-a5da-2a14a66cfc2f" in namespace "pods-4883" to be "terminated due to deadline exceeded"
Aug 21 01:05:41.108: INFO: Pod "pod-update-activedeadlineseconds-23bc440e-98b4-4753-a5da-2a14a66cfc2f": Phase="Running", Reason="", readiness=true. Elapsed: 35.391728ms
Aug 21 01:05:43.115: INFO: Pod "pod-update-activedeadlineseconds-23bc440e-98b4-4753-a5da-2a14a66cfc2f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.0418806s
Aug 21 01:05:43.115: INFO: Pod "pod-update-activedeadlineseconds-23bc440e-98b4-4753-a5da-2a14a66cfc2f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:05:43.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4883" for this suite.

• [SLOW TEST:6.691 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2575,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:05:43.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 21 01:05:43.224: INFO: Waiting up to 5m0s for pod "pod-a9751380-9651-4c9f-ad36-558c99319f42" in namespace "emptydir-9991" to be "success or failure"
Aug 21 01:05:43.234: INFO: Pod "pod-a9751380-9651-4c9f-ad36-558c99319f42": Phase="Pending", Reason="", readiness=false. Elapsed: 9.493961ms
Aug 21 01:05:45.266: INFO: Pod "pod-a9751380-9651-4c9f-ad36-558c99319f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041585219s
Aug 21 01:05:47.337: INFO: Pod "pod-a9751380-9651-4c9f-ad36-558c99319f42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112095491s
STEP: Saw pod success
Aug 21 01:05:47.337: INFO: Pod "pod-a9751380-9651-4c9f-ad36-558c99319f42" satisfied condition "success or failure"
Aug 21 01:05:47.348: INFO: Trying to get logs from node jerma-worker pod pod-a9751380-9651-4c9f-ad36-558c99319f42 container test-container: 
STEP: delete the pod
Aug 21 01:05:47.393: INFO: Waiting for pod pod-a9751380-9651-4c9f-ad36-558c99319f42 to disappear
Aug 21 01:05:47.397: INFO: Pod pod-a9751380-9651-4c9f-ad36-558c99319f42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:05:47.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9991" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2591,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:05:47.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:05:51.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5258" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2606,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:05:51.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:05:51.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92" in namespace "downward-api-4422" to be "success or failure"
Aug 21 01:05:51.734: INFO: Pod "downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92": Phase="Pending", Reason="", readiness=false. Elapsed: 32.524148ms
Aug 21 01:05:53.763: INFO: Pod "downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061703097s
Aug 21 01:05:55.770: INFO: Pod "downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069133836s
STEP: Saw pod success
Aug 21 01:05:55.770: INFO: Pod "downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92" satisfied condition "success or failure"
Aug 21 01:05:55.776: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92 container client-container: 
STEP: delete the pod
Aug 21 01:05:55.822: INFO: Waiting for pod downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92 to disappear
Aug 21 01:05:55.828: INFO: Pod downwardapi-volume-3e84081f-6974-4ca9-b876-1298dc347b92 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:05:55.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4422" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2620,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:05:55.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:05:55.931: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 21 01:05:55.947: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 21 01:06:00.954: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 01:06:00.955: INFO: Creating deployment "test-rolling-update-deployment"
Aug 21 01:06:00.977: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 21 01:06:00.994: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 21 01:06:03.006: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 21 01:06:03.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733568761, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733568761, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733568761, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733568761, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:06:05.016: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 21 01:06:05.032: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-8053 /apis/apps/v1/namespaces/deployment-8053/deployments/test-rolling-update-deployment 67ab2884-f401-4394-b33e-a2b33cc69eb2 1989784 1 2020-08-21 01:06:00 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40047b2ac8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 01:06:01 +0000 UTC,LastTransitionTime:2020-08-21 01:06:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-21 01:06:04 +0000 UTC,LastTransitionTime:2020-08-21 01:06:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 21 01:06:05.038: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-8053 /apis/apps/v1/namespaces/deployment-8053/replicasets/test-rolling-update-deployment-67cf4f6444 e3768cd9-d2cb-42c8-8511-dd7019e7ada6 1989773 1 2020-08-21 01:06:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 67ab2884-f401-4394-b33e-a2b33cc69eb2 0x400475b707 0x400475b708}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400475b778  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 01:06:05.038: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 21 01:06:05.039: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-8053 /apis/apps/v1/namespaces/deployment-8053/replicasets/test-rolling-update-controller 08ac5971-4d40-4790-b058-58d7bd18749a 1989783 2 2020-08-21 01:05:55 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 67ab2884-f401-4394-b33e-a2b33cc69eb2 0x400475b637 0x400475b638}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x400475b698  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 01:06:05.046: INFO: Pod "test-rolling-update-deployment-67cf4f6444-2p2t6" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-2p2t6 test-rolling-update-deployment-67cf4f6444- deployment-8053 /api/v1/namespaces/deployment-8053/pods/test-rolling-update-deployment-67cf4f6444-2p2t6 6c8e063d-da0a-4f4b-b4be-fa359e90668a 1989772 0 2020-08-21 01:06:01 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 e3768cd9-d2cb-42c8-8511-dd7019e7ada6 0x400475bbb7 0x400475bbb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wl75m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wl75m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wl75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:06:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.44,StartTime:2020-08-21 01:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 01:06:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://5af3ab8baf2f8568482ff3dca42d62ac49aaefc3a38be8c3f6d341880d61e623,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:06:05.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8053" for this suite.

• [SLOW TEST:9.188 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":163,"skipped":2632,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:06:05.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4648
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 01:06:05.178: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 21 01:06:29.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.55:8080/dial?request=hostname&protocol=http&host=10.244.2.45&port=8080&tries=1'] Namespace:pod-network-test-4648 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 01:06:29.313: INFO: >>> kubeConfig: /root/.kube/config
I0821 01:06:29.374570       7 log.go:172] (0x40031322c0) (0x4000d07680) Create stream
I0821 01:06:29.374744       7 log.go:172] (0x40031322c0) (0x4000d07680) Stream added, broadcasting: 1
I0821 01:06:29.378466       7 log.go:172] (0x40031322c0) Reply frame received for 1
I0821 01:06:29.378637       7 log.go:172] (0x40031322c0) (0x40016d6460) Create stream
I0821 01:06:29.378715       7 log.go:172] (0x40031322c0) (0x40016d6460) Stream added, broadcasting: 3
I0821 01:06:29.380109       7 log.go:172] (0x40031322c0) Reply frame received for 3
I0821 01:06:29.380280       7 log.go:172] (0x40031322c0) (0x4000d07720) Create stream
I0821 01:06:29.380380       7 log.go:172] (0x40031322c0) (0x4000d07720) Stream added, broadcasting: 5
I0821 01:06:29.382015       7 log.go:172] (0x40031322c0) Reply frame received for 5
I0821 01:06:29.465751       7 log.go:172] (0x40031322c0) Data frame received for 3
I0821 01:06:29.465967       7 log.go:172] (0x40031322c0) Data frame received for 5
I0821 01:06:29.466123       7 log.go:172] (0x4000d07720) (5) Data frame handling
I0821 01:06:29.466275       7 log.go:172] (0x40016d6460) (3) Data frame handling
I0821 01:06:29.466444       7 log.go:172] (0x40016d6460) (3) Data frame sent
I0821 01:06:29.466588       7 log.go:172] (0x40031322c0) Data frame received for 3
I0821 01:06:29.466679       7 log.go:172] (0x40016d6460) (3) Data frame handling
I0821 01:06:29.468085       7 log.go:172] (0x40031322c0) Data frame received for 1
I0821 01:06:29.468163       7 log.go:172] (0x4000d07680) (1) Data frame handling
I0821 01:06:29.468235       7 log.go:172] (0x4000d07680) (1) Data frame sent
I0821 01:06:29.468318       7 log.go:172] (0x40031322c0) (0x4000d07680) Stream removed, broadcasting: 1
I0821 01:06:29.468423       7 log.go:172] (0x40031322c0) Go away received
I0821 01:06:29.468980       7 log.go:172] (0x40031322c0) (0x4000d07680) Stream removed, broadcasting: 1
I0821 01:06:29.469106       7 log.go:172] (0x40031322c0) (0x40016d6460) Stream removed, broadcasting: 3
I0821 01:06:29.469207       7 log.go:172] (0x40031322c0) (0x4000d07720) Stream removed, broadcasting: 5
Aug 21 01:06:29.469: INFO: Waiting for responses: map[]
Aug 21 01:06:29.474: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.55:8080/dial?request=hostname&protocol=http&host=10.244.1.54&port=8080&tries=1'] Namespace:pod-network-test-4648 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 01:06:29.474: INFO: >>> kubeConfig: /root/.kube/config
I0821 01:06:29.534487       7 log.go:172] (0x4002ed8370) (0x4000edac80) Create stream
I0821 01:06:29.534719       7 log.go:172] (0x4002ed8370) (0x4000edac80) Stream added, broadcasting: 1
I0821 01:06:29.538612       7 log.go:172] (0x4002ed8370) Reply frame received for 1
I0821 01:06:29.538753       7 log.go:172] (0x4002ed8370) (0x4002264780) Create stream
I0821 01:06:29.538819       7 log.go:172] (0x4002ed8370) (0x4002264780) Stream added, broadcasting: 3
I0821 01:06:29.540069       7 log.go:172] (0x4002ed8370) Reply frame received for 3
I0821 01:06:29.540220       7 log.go:172] (0x4002ed8370) (0x400136e460) Create stream
I0821 01:06:29.540293       7 log.go:172] (0x4002ed8370) (0x400136e460) Stream added, broadcasting: 5
I0821 01:06:29.541904       7 log.go:172] (0x4002ed8370) Reply frame received for 5
I0821 01:06:29.614939       7 log.go:172] (0x4002ed8370) Data frame received for 3
I0821 01:06:29.615142       7 log.go:172] (0x4002264780) (3) Data frame handling
I0821 01:06:29.615323       7 log.go:172] (0x4002264780) (3) Data frame sent
I0821 01:06:29.615459       7 log.go:172] (0x4002ed8370) Data frame received for 3
I0821 01:06:29.615590       7 log.go:172] (0x4002264780) (3) Data frame handling
I0821 01:06:29.615675       7 log.go:172] (0x4002ed8370) Data frame received for 5
I0821 01:06:29.615765       7 log.go:172] (0x400136e460) (5) Data frame handling
I0821 01:06:29.616534       7 log.go:172] (0x4002ed8370) Data frame received for 1
I0821 01:06:29.616619       7 log.go:172] (0x4000edac80) (1) Data frame handling
I0821 01:06:29.616681       7 log.go:172] (0x4000edac80) (1) Data frame sent
I0821 01:06:29.616855       7 log.go:172] (0x4002ed8370) (0x4000edac80) Stream removed, broadcasting: 1
I0821 01:06:29.616946       7 log.go:172] (0x4002ed8370) Go away received
I0821 01:06:29.617266       7 log.go:172] (0x4002ed8370) (0x4000edac80) Stream removed, broadcasting: 1
I0821 01:06:29.617362       7 log.go:172] (0x4002ed8370) (0x4002264780) Stream removed, broadcasting: 3
I0821 01:06:29.617434       7 log.go:172] (0x4002ed8370) (0x400136e460) Stream removed, broadcasting: 5
Aug 21 01:06:29.617: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:06:29.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4648" for this suite.

• [SLOW TEST:24.569 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2653,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:06:29.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:06:29.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 01:06:49.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1457 create -f -'
Aug 21 01:06:56.431: INFO: stderr: ""
Aug 21 01:06:56.432: INFO: stdout: "e2e-test-crd-publish-openapi-501-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 21 01:06:56.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1457 delete e2e-test-crd-publish-openapi-501-crds test-cr'
Aug 21 01:06:57.687: INFO: stderr: ""
Aug 21 01:06:57.687: INFO: stdout: "e2e-test-crd-publish-openapi-501-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 21 01:06:57.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1457 apply -f -'
Aug 21 01:06:59.293: INFO: stderr: ""
Aug 21 01:06:59.293: INFO: stdout: "e2e-test-crd-publish-openapi-501-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 21 01:06:59.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1457 delete e2e-test-crd-publish-openapi-501-crds test-cr'
Aug 21 01:07:00.539: INFO: stderr: ""
Aug 21 01:07:00.540: INFO: stdout: "e2e-test-crd-publish-openapi-501-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 21 01:07:00.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-501-crds'
Aug 21 01:07:02.155: INFO: stderr: ""
Aug 21 01:07:02.155: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-501-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:07:21.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1457" for this suite.

• [SLOW TEST:52.065 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":165,"skipped":2696,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:07:21.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-334b4160-9272-43e8-aa0d-a936cb1ae7e2
STEP: Creating a pod to test consume configMaps
Aug 21 01:07:21.795: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf" in namespace "projected-2575" to be "success or failure"
Aug 21 01:07:21.799: INFO: Pod "pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370277ms
Aug 21 01:07:23.866: INFO: Pod "pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071107043s
Aug 21 01:07:25.873: INFO: Pod "pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078112628s
STEP: Saw pod success
Aug 21 01:07:25.873: INFO: Pod "pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf" satisfied condition "success or failure"
Aug 21 01:07:25.878: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 01:07:25.944: INFO: Waiting for pod pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf to disappear
Aug 21 01:07:25.985: INFO: Pod pod-projected-configmaps-853e54c8-0e61-4cf5-a231-576e48662ebf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:07:25.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2575" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2759,"failed":0}
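The projected-configMap case above mounts a single ConfigMap key at a chosen path with an explicit per-item file mode. A minimal sketch of that shape, with illustrative names and a busybox reader standing in for the test's consumer container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # must exist in the namespace
          items:
          - key: data-2
            path: path/to/data-2
            mode: 0400        # the per-item mode that "Item mode set" refers to
```

The test passes once the pod reaches `Succeeded`, i.e. the container read the mapped key's content from the projected path and exited cleanly.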
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:07:25.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 21 01:07:26.339: INFO: Waiting up to 5m0s for pod "pod-f877d029-058f-4775-b820-27b6cb3cb77b" in namespace "emptydir-9023" to be "success or failure"
Aug 21 01:07:26.345: INFO: Pod "pod-f877d029-058f-4775-b820-27b6cb3cb77b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.754649ms
Aug 21 01:07:28.357: INFO: Pod "pod-f877d029-058f-4775-b820-27b6cb3cb77b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017285986s
Aug 21 01:07:30.362: INFO: Pod "pod-f877d029-058f-4775-b820-27b6cb3cb77b": Phase="Running", Reason="", readiness=true. Elapsed: 4.022829205s
Aug 21 01:07:32.369: INFO: Pod "pod-f877d029-058f-4775-b820-27b6cb3cb77b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02948402s
STEP: Saw pod success
Aug 21 01:07:32.369: INFO: Pod "pod-f877d029-058f-4775-b820-27b6cb3cb77b" satisfied condition "success or failure"
Aug 21 01:07:32.374: INFO: Trying to get logs from node jerma-worker2 pod pod-f877d029-058f-4775-b820-27b6cb3cb77b container test-container: 
STEP: delete the pod
Aug 21 01:07:32.393: INFO: Waiting for pod pod-f877d029-058f-4775-b820-27b6cb3cb77b to disappear
Aug 21 01:07:32.398: INFO: Pod pod-f877d029-058f-4775-b820-27b6cb3cb77b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:07:32.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9023" for this suite.

• [SLOW TEST:6.424 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2780,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:07:32.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 21 01:07:33.095: INFO: created pod pod-service-account-defaultsa
Aug 21 01:07:33.095: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 21 01:07:33.115: INFO: created pod pod-service-account-mountsa
Aug 21 01:07:33.115: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 21 01:07:33.150: INFO: created pod pod-service-account-nomountsa
Aug 21 01:07:33.150: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 21 01:07:33.226: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 21 01:07:33.226: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 21 01:07:33.238: INFO: created pod pod-service-account-mountsa-mountspec
Aug 21 01:07:33.238: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 21 01:07:33.285: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 21 01:07:33.285: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 21 01:07:33.308: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 21 01:07:33.309: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 21 01:07:33.369: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 21 01:07:33.370: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 21 01:07:33.387: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 21 01:07:33.387: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:07:33.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2049" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":168,"skipped":2784,"failed":0}
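The test above creates pods combining service-account-level and pod-level automount settings; the log shows that when `automountServiceAccountToken` is set on the pod spec, it overrides the service account's setting. A minimal sketch of the opt-out being exercised (the manifest below is illustrative, not the test's actual fixture; the image and names are assumptions):

```shell
# Hypothetical pod manifest illustrating the opt-out: a pod-level
# automountServiceAccountToken: false wins even if the service
# account would otherwise mount a token volume.
cat > /tmp/nomount-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountspec   # name mirrors the log; fixture is assumed
spec:
  serviceAccountName: default
  automountServiceAccountToken: false     # pod-level opt-out overrides the SA
  containers:
  - name: token-test
    image: k8s.gcr.io/pause:3.1           # assumed image
EOF
grep -c 'automountServiceAccountToken: false' /tmp/nomount-pod.yaml
```

The test then verifies, for each of the nine SA/pod combinations logged above, whether a token volume mount appears in the created pod.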
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:07:33.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5356.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5356.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5356.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
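The one-liners above are hard to read because the log prints them with `$$` (the escaped form of `$` inside the test's template). The probe derives the pod's A-record name by replacing the dots in the pod IP with dashes, then polls that name over UDP and TCP. A readable local sketch of the name derivation (the IP below is a stand-in for `hostname -i` inside the probe pod):

```shell
# Stand-in pod IP; inside the real probe this comes from `hostname -i`.
pod_ip="10.244.1.5"
# Pod A record: dots in the IP become dashes, suffixed with
# <namespace>.pod.cluster.local (namespace dns-5356 from the log).
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5356.pod.cluster.local"}')
echo "$podARec"   # -> 10-244-1-5.dns-5356.pod.cluster.local
# The real probe then loops up to 600 times, once per second:
#   dig +notcp +noall +answer +search "$podARec" A   (UDP)
#   dig +tcp   +noall +answer +search "$podARec" A   (TCP)
# writing OK to a per-query file under /results when the answer is non-empty.
```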

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 01:07:49.691: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.696: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.700: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.703: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.716: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.719: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.723: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.727: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:49.735: INFO: Lookups using dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local]

Aug 21 01:07:54.742: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.747: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.751: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.756: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.769: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.774: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.778: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.782: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:54.790: INFO: Lookups using dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local]

Aug 21 01:07:59.743: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.748: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.753: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.757: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.771: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.775: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.779: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.783: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:07:59.791: INFO: Lookups using dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local]

Aug 21 01:08:04.741: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.746: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.750: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.753: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.764: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.768: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.772: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.775: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:04.784: INFO: Lookups using dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local]

Aug 21 01:08:09.742: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.747: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.751: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.755: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.765: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.769: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.772: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.775: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:09.783: INFO: Lookups using dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local]

Aug 21 01:08:14.742: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.746: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.750: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.755: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.766: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.771: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.775: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.778: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local from pod dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a: the server could not find the requested resource (get pods dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a)
Aug 21 01:08:14.785: INFO: Lookups using dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5356.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5356.svc.cluster.local jessie_udp@dns-test-service-2.dns-5356.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5356.svc.cluster.local]

Aug 21 01:08:19.784: INFO: DNS probes using dns-5356/dns-test-519a5e71-96e7-47f4-8692-35dd64c5bb8a succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:08:20.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5356" for this suite.

• [SLOW TEST:46.748 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":169,"skipped":2806,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:08:20.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:08:20.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:08:24.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8607" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2821,"failed":0}
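This test attaches to the pod's `exec` subresource over a websocket rather than SPDY. The log does not print the request, but a sketch of the URL shape involved looks like this (the pod name and command are hypothetical; only the namespace `pods-8607` comes from the log):

```shell
# Construct the exec-subresource path the websocket client would hit.
# /api/v1/namespaces/{ns}/pods/{pod}/exec is the standard subresource;
# pod name and command here are made up for illustration.
ns="pods-8607"
pod="pod-exec-websocket-test"   # hypothetical; the log does not print the pod name
path="/api/v1/namespaces/${ns}/pods/${pod}/exec?command=echo&command=hello&stdout=1&stderr=1"
echo "wss://<apiserver>${path}"
```

The framework then streams stdout/stderr frames back over the websocket and asserts on the command's output.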
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:08:24.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-123/secret-test-72427916-030f-4226-9424-d5729033b2c3
STEP: Creating a pod to test consume secrets
Aug 21 01:08:24.907: INFO: Waiting up to 5m0s for pod "pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88" in namespace "secrets-123" to be "success or failure"
Aug 21 01:08:24.914: INFO: Pod "pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.854481ms
Aug 21 01:08:26.955: INFO: Pod "pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048171948s
Aug 21 01:08:29.112: INFO: Pod "pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204800368s
STEP: Saw pod success
Aug 21 01:08:29.112: INFO: Pod "pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88" satisfied condition "success or failure"
Aug 21 01:08:29.117: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88 container env-test: 
STEP: delete the pod
Aug 21 01:08:29.138: INFO: Waiting for pod pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88 to disappear
Aug 21 01:08:29.148: INFO: Pod pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:08:29.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-123" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2827,"failed":0}
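"Consumable via the environment" means the secret's key is injected into the container as an environment variable through `secretKeyRef`, and the test container (`env-test` in the log) dumps its environment so the value can be checked. A hedged sketch of such a pod (secret and pod names mirror the log; the key and variable names are assumptions):

```shell
# Hypothetical manifest for consuming a secret as an env var.
cat > /tmp/secret-env-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-cfb6b1a5-9e54-4ef5-bb2b-49f80fd8fc88
  namespace: secrets-123
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                # assumed image
    command: ["sh", "-c", "env"]  # dump the environment for verification
    env:
    - name: SECRET_VALUE          # assumed variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-72427916-030f-4226-9424-d5729033b2c3
          key: data-1             # assumed key
EOF
grep -c 'secretKeyRef' /tmp/secret-env-pod.yaml
```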
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:08:29.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Aug 21 01:08:29.259: INFO: Waiting up to 5m0s for pod "var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20" in namespace "var-expansion-4431" to be "success or failure"
Aug 21 01:08:29.269: INFO: Pod "var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20": Phase="Pending", Reason="", readiness=false. Elapsed: 9.452821ms
Aug 21 01:08:31.316: INFO: Pod "var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056493761s
Aug 21 01:08:33.323: INFO: Pod "var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063571444s
STEP: Saw pod success
Aug 21 01:08:33.323: INFO: Pod "var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20" satisfied condition "success or failure"
Aug 21 01:08:33.369: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20 container dapi-container: 
STEP: delete the pod
Aug 21 01:08:33.425: INFO: Waiting for pod var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20 to disappear
Aug 21 01:08:33.430: INFO: Pod var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:08:33.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4431" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2834,"failed":0}
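The substitution being tested is `$(VAR)` expansion: a `$(NAME)` reference in a container's `args` is replaced by the kubelet with the value of an env var declared in the same container. A sketch under assumed variable and value names (the pod and container names mirror the log):

```shell
# Hypothetical manifest showing $(VAR) expansion in args.
cat > /tmp/var-expansion-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-7ff3b37f-bd7c-4b56-b5c0-761f496d1e20
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                # assumed image
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]     # $(MESSAGE) is expanded by the kubelet, not the shell
    env:
    - name: MESSAGE               # assumed variable name and value
      value: "test-value"
EOF
grep -c '\$(MESSAGE)' /tmp/var-expansion-pod.yaml
```

Note the distinction: `$(MESSAGE)` is Kubernetes-level expansion applied before the container starts; shell-style `$MESSAGE` would instead be expanded by `sh` at runtime.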
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:08:33.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 21 01:08:33.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 21 01:09:32.451: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 01:09:42.526: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:10:51.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5705" for this suite.

• [SLOW TEST:137.921 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":173,"skipped":2850,"failed":0}
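The multi-version publishing behavior this spec exercises can be reproduced with a single CRD that serves two versions; both served versions then appear in the cluster's OpenAPI document. A minimal sketch (the group, kind, and version names here are illustrative, not the randomly generated `e2e-test-crd-publish-openapi-…` names the test creates):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
    - name: v1
      served: true
      storage: true          # exactly one version may be the storage version
      schema:
        openAPIV3Schema:
          type: object
    - name: v2
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
```

The "two CRDs" variant of the step above is the same idea with two separate CRD objects sharing `spec.group` but declaring different versions.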
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:10:51.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9929
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9929
STEP: Creating statefulset with conflicting port in namespace statefulset-9929
STEP: Waiting until pod test-pod will start running in namespace statefulset-9929
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9929
Aug 21 01:10:55.542: INFO: Observed stateful pod in namespace: statefulset-9929, name: ss-0, uid: baea495e-9dfa-432b-9c08-c20c7c5082f7, status phase: Pending. Waiting for statefulset controller to delete.
Aug 21 01:10:56.079: INFO: Observed stateful pod in namespace: statefulset-9929, name: ss-0, uid: baea495e-9dfa-432b-9c08-c20c7c5082f7, status phase: Failed. Waiting for statefulset controller to delete.
Aug 21 01:10:56.087: INFO: Observed stateful pod in namespace: statefulset-9929, name: ss-0, uid: baea495e-9dfa-432b-9c08-c20c7c5082f7, status phase: Failed. Waiting for statefulset controller to delete.
Aug 21 01:10:56.093: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9929
STEP: Removing pod with conflicting port in namespace statefulset-9929
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9929 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 01:11:02.204: INFO: Deleting all statefulset in ns statefulset-9929
Aug 21 01:11:02.209: INFO: Scaling statefulset ss to 0
Aug 21 01:11:12.389: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 01:11:12.393: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:11:12.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9929" for this suite.

• [SLOW TEST:21.102 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":174,"skipped":2854,"failed":0}
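The eviction scenario in this spec works by pinning a StatefulSet pod (via `nodeName`, which bypasses the scheduler) onto a node where another pod already holds the same hostPort; the kubelet admits `ss-0`, it fails on the port conflict, and the StatefulSet controller deletes and recreates it until the blocking pod is removed. A rough sketch of the StatefulSet side (image and port numbers are illustrative, not the test's actual values):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: jerma-worker      # bypasses scheduling, so the kubelet must reject the conflict
      containers:
        - name: webserver
          image: httpd:2.4
          ports:
            - containerPort: 80
              hostPort: 21017     # same hostPort as the pre-existing conflicting pod
```

This matches the observed Pending → Failed → delete → recreate cycle for `ss-0` logged above.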
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:11:12.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:11:12.533: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:11:12.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-833" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":175,"skipped":2892,"failed":0}
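The status sub-resource behavior being verified here is enabled by `subresources.status` on the CRD version: with it set, requests to `/status` read and write only `.status`, and writes to the main resource ignore `.status`. A minimal illustrative CRD (names are placeholders):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
      subresources:
        status: {}        # enables GET/PUT/PATCH on .../foos/<name>/status
```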
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:11:12.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 01:11:17.100: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:11:17.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5089" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2903,"failed":0}
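The termination-message check above ("Expected: &{OK} to match Container's Termination Message: OK") corresponds to a pod along these lines (image and message are illustrative): with `FallbackToLogsOnError`, the message is read from the termination file when it exists, and logs are used only if the container fails without writing one. Since this container writes the file and exits 0, the file contents win:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: busybox:1.36
      command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log          # the default path
      terminationMessagePolicy: FallbackToLogsOnError
```

After the pod succeeds, the message surfaces in `status.containerStatuses[0].state.terminated.message`.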
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:11:17.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:11:53.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3090" for this suite.

• [SLOW TEST:35.671 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2928,"failed":0}
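The three container names in this spec encode the restart policy under test: `terminate-cmd-rpa` (RestartPolicy Always), `terminate-cmd-rpof` (OnFailure), and `terminate-cmd-rpn` (Never); each is checked for the expected RestartCount, Phase, Ready condition, and State. One illustrative case (image and command are assumptions, not the test's own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof
spec:
  restartPolicy: OnFailure
  containers:
    - name: terminate-cmd-rpof
      image: busybox:1.36
      command: ["/bin/sh", "-c", "exit 1"]   # non-zero exit triggers restarts under OnFailure
```

Under `OnFailure`, RestartCount climbs while the command keeps failing; under `Never`, the pod goes straight to a terminal phase; under `Always`, even a zero exit code is restarted.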
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:11:53.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 21 01:11:59.981: INFO: Successfully updated pod "labelsupdate6fd8be21-e0b9-4837-b6e2-c1ee293f87a7"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:12:02.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2437" for this suite.

• [SLOW TEST:9.016 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2937,"failed":0}
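The "update labels on modification" flow above relies on a projected downwardAPI volume: the kubelet rewrites the projected file after the pod's labels change, and the test polls the container's view of the file until the update lands. A minimal sketch (names, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key: value1                 # the test later patches this label and waits for the file to change
spec:
  containers:
    - name: client
      image: busybox:1.36
      command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: labels
                  fieldRef:
                    fieldPath: metadata.labels
```

Note that environment-variable downward API values are fixed at container start; only the volume form reflects label updates, which is why this test uses a volume.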
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:12:02.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-7365
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 01:12:02.091: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 21 01:12:26.326: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.55 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7365 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 01:12:26.326: INFO: >>> kubeConfig: /root/.kube/config
I0821 01:12:26.382887       7 log.go:172] (0x4000a669a0) (0x40026cf680) Create stream
I0821 01:12:26.383049       7 log.go:172] (0x4000a669a0) (0x40026cf680) Stream added, broadcasting: 1
I0821 01:12:26.386449       7 log.go:172] (0x4000a669a0) Reply frame received for 1
I0821 01:12:26.386603       7 log.go:172] (0x4000a669a0) (0x4002129180) Create stream
I0821 01:12:26.386689       7 log.go:172] (0x4000a669a0) (0x4002129180) Stream added, broadcasting: 3
I0821 01:12:26.388083       7 log.go:172] (0x4000a669a0) Reply frame received for 3
I0821 01:12:26.388278       7 log.go:172] (0x4000a669a0) (0x40021292c0) Create stream
I0821 01:12:26.388373       7 log.go:172] (0x4000a669a0) (0x40021292c0) Stream added, broadcasting: 5
I0821 01:12:26.389965       7 log.go:172] (0x4000a669a0) Reply frame received for 5
I0821 01:12:27.465126       7 log.go:172] (0x4000a669a0) Data frame received for 3
I0821 01:12:27.465344       7 log.go:172] (0x4002129180) (3) Data frame handling
I0821 01:12:27.465540       7 log.go:172] (0x4000a669a0) Data frame received for 5
I0821 01:12:27.465716       7 log.go:172] (0x40021292c0) (5) Data frame handling
I0821 01:12:27.465854       7 log.go:172] (0x4002129180) (3) Data frame sent
I0821 01:12:27.466016       7 log.go:172] (0x4000a669a0) Data frame received for 3
I0821 01:12:27.466132       7 log.go:172] (0x4002129180) (3) Data frame handling
I0821 01:12:27.467322       7 log.go:172] (0x4000a669a0) Data frame received for 1
I0821 01:12:27.467525       7 log.go:172] (0x40026cf680) (1) Data frame handling
I0821 01:12:27.467682       7 log.go:172] (0x40026cf680) (1) Data frame sent
I0821 01:12:27.467868       7 log.go:172] (0x4000a669a0) (0x40026cf680) Stream removed, broadcasting: 1
I0821 01:12:27.468104       7 log.go:172] (0x4000a669a0) Go away received
I0821 01:12:27.468426       7 log.go:172] (0x4000a669a0) (0x40026cf680) Stream removed, broadcasting: 1
I0821 01:12:27.468615       7 log.go:172] (0x4000a669a0) (0x4002129180) Stream removed, broadcasting: 3
I0821 01:12:27.468822       7 log.go:172] (0x4000a669a0) (0x40021292c0) Stream removed, broadcasting: 5
Aug 21 01:12:27.468: INFO: Found all expected endpoints: [netserver-0]
Aug 21 01:12:27.474: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.70 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7365 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 01:12:27.474: INFO: >>> kubeConfig: /root/.kube/config
I0821 01:12:27.541991       7 log.go:172] (0x4000a66fd0) (0x40026cfb80) Create stream
I0821 01:12:27.542147       7 log.go:172] (0x4000a66fd0) (0x40026cfb80) Stream added, broadcasting: 1
I0821 01:12:27.545225       7 log.go:172] (0x4000a66fd0) Reply frame received for 1
I0821 01:12:27.545394       7 log.go:172] (0x4000a66fd0) (0x40026cfc20) Create stream
I0821 01:12:27.545473       7 log.go:172] (0x4000a66fd0) (0x40026cfc20) Stream added, broadcasting: 3
I0821 01:12:27.547015       7 log.go:172] (0x4000a66fd0) Reply frame received for 3
I0821 01:12:27.547156       7 log.go:172] (0x4000a66fd0) (0x40022cc0a0) Create stream
I0821 01:12:27.547240       7 log.go:172] (0x4000a66fd0) (0x40022cc0a0) Stream added, broadcasting: 5
I0821 01:12:27.548528       7 log.go:172] (0x4000a66fd0) Reply frame received for 5
I0821 01:12:28.628227       7 log.go:172] (0x4000a66fd0) Data frame received for 3
I0821 01:12:28.628482       7 log.go:172] (0x40026cfc20) (3) Data frame handling
I0821 01:12:28.628696       7 log.go:172] (0x40026cfc20) (3) Data frame sent
I0821 01:12:28.629109       7 log.go:172] (0x4000a66fd0) Data frame received for 3
I0821 01:12:28.629289       7 log.go:172] (0x40026cfc20) (3) Data frame handling
I0821 01:12:28.629438       7 log.go:172] (0x4000a66fd0) Data frame received for 5
I0821 01:12:28.629636       7 log.go:172] (0x40022cc0a0) (5) Data frame handling
I0821 01:12:28.630580       7 log.go:172] (0x4000a66fd0) Data frame received for 1
I0821 01:12:28.630755       7 log.go:172] (0x40026cfb80) (1) Data frame handling
I0821 01:12:28.630901       7 log.go:172] (0x40026cfb80) (1) Data frame sent
I0821 01:12:28.631054       7 log.go:172] (0x4000a66fd0) (0x40026cfb80) Stream removed, broadcasting: 1
I0821 01:12:28.631226       7 log.go:172] (0x4000a66fd0) Go away received
I0821 01:12:28.631466       7 log.go:172] (0x4000a66fd0) (0x40026cfb80) Stream removed, broadcasting: 1
I0821 01:12:28.631626       7 log.go:172] (0x4000a66fd0) (0x40026cfc20) Stream removed, broadcasting: 3
I0821 01:12:28.631808       7 log.go:172] (0x4000a66fd0) (0x40022cc0a0) Stream removed, broadcasting: 5
Aug 21 01:12:28.631: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:12:28.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7365" for this suite.

• [SLOW TEST:26.616 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2945,"failed":0}
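The `ExecWithOptions` lines above run the UDP probe (`echo hostName | nc -w 1 -u <pod-ip> 8081`) from a pod on the node's own network namespace, which is what makes this a node-to-pod check rather than pod-to-pod. A sketch of that host-network probe pod (image tag and sleep command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true             # exec'd probes originate from the node's network namespace
  containers:
    - name: agnhost
      image: k8s.gcr.io/e2e-test-images/agnhost:2.8
      command: ["/bin/sh", "-c", "sleep 3600"]
      # probe executed via kubectl exec / ExecWithOptions, e.g.:
      #   echo hostName | nc -w 1 -u 10.244.2.55 8081 | grep -v '^\s*$'
```

The `netserver-*` target pods echo their hostname back over UDP 8081, so a non-empty reply proves node-to-pod reachability on each node.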
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:12:28.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 21 01:12:28.732: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 01:12:28.757: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 01:12:28.762: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 21 01:12:28.775: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.775: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 01:12:28.775: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.775: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 01:12:28.775: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.775: INFO: 	Container app ready: true, restart count 0
Aug 21 01:12:28.775: INFO: netserver-0 from pod-network-test-7365 started at 2020-08-21 01:12:02 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.775: INFO: 	Container webserver ready: true, restart count 0
Aug 21 01:12:28.775: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 21 01:12:28.801: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.802: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 01:12:28.802: INFO: test-container-pod from pod-network-test-7365 started at 2020-08-21 01:12:20 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.802: INFO: 	Container webserver ready: true, restart count 0
Aug 21 01:12:28.802: INFO: host-test-container-pod from pod-network-test-7365 started at 2020-08-21 01:12:20 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.802: INFO: 	Container agnhost ready: true, restart count 0
Aug 21 01:12:28.802: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.802: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 01:12:28.802: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.802: INFO: 	Container app ready: true, restart count 0
Aug 21 01:12:28.802: INFO: netserver-1 from pod-network-test-7365 started at 2020-08-21 01:12:02 +0000 UTC (1 container statuses recorded)
Aug 21 01:12:28.802: INFO: 	Container webserver ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-61985a9d-17b2-4100-802a-735bcb754fd1 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-61985a9d-17b2-4100-802a-735bcb754fd1 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-61985a9d-17b2-4100-802a-735bcb754fd1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:12:47.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7516" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:18.374 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":180,"skipped":2955,"failed":0}
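The predicate this spec validates keys host-port conflicts on the triple (hostIP, hostPort, protocol), so all three pods above can share hostPort 54321 on one node. An illustrative sketch of the second pod (image and containerPort are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  nodeName: jerma-worker        # same node as pod1, which binds 127.0.0.1:54321/TCP
  containers:
    - name: agnhost
      image: k8s.gcr.io/e2e-test-images/agnhost:2.8
      ports:
        - containerPort: 8080
          hostPort: 54321
          hostIP: 127.0.0.2     # differs from pod1's hostIP, so no conflict
          protocol: TCP
```

The third pod keeps `hostIP: 127.0.0.2` but switches to `protocol: UDP`, again avoiding a conflict; only an exact match on all three fields would make the pods unschedulable together.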
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:12:47.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:12:47.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 21 01:13:06.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 create -f -'
Aug 21 01:13:15.911: INFO: stderr: ""
Aug 21 01:13:15.911: INFO: stdout: "e2e-test-crd-publish-openapi-8034-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 21 01:13:15.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 delete e2e-test-crd-publish-openapi-8034-crds test-foo'
Aug 21 01:13:17.259: INFO: stderr: ""
Aug 21 01:13:17.260: INFO: stdout: "e2e-test-crd-publish-openapi-8034-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 21 01:13:17.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 apply -f -'
Aug 21 01:13:18.877: INFO: stderr: ""
Aug 21 01:13:18.877: INFO: stdout: "e2e-test-crd-publish-openapi-8034-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 21 01:13:18.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 delete e2e-test-crd-publish-openapi-8034-crds test-foo'
Aug 21 01:13:20.141: INFO: stderr: ""
Aug 21 01:13:20.141: INFO: stdout: "e2e-test-crd-publish-openapi-8034-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 21 01:13:20.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 create -f -'
Aug 21 01:13:21.633: INFO: rc: 1
Aug 21 01:13:21.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 apply -f -'
Aug 21 01:13:23.173: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 21 01:13:23.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 create -f -'
Aug 21 01:13:24.673: INFO: rc: 1
Aug 21 01:13:24.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7755 apply -f -'
Aug 21 01:13:26.182: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 21 01:13:26.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8034-crds'
Aug 21 01:13:27.983: INFO: stderr: ""
Aug 21 01:13:27.983: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8034-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 21 01:13:27.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8034-crds.metadata'
Aug 21 01:13:29.630: INFO: stderr: ""
Aug 21 01:13:29.630: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8034-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 21 01:13:29.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8034-crds.spec'
Aug 21 01:13:31.153: INFO: stderr: ""
Aug 21 01:13:31.153: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8034-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 21 01:13:31.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8034-crds.spec.bars'
Aug 21 01:13:32.670: INFO: stderr: ""
Aug 21 01:13:32.671: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8034-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 21 01:13:32.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8034-crds.spec.bars2'
Aug 21 01:13:34.194: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:13:53.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7755" for this suite.

• [SLOW TEST:66.665 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":181,"skipped":2987,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:13:53.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 21 01:13:53.740: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:14:05.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3511" for this suite.

• [SLOW TEST:11.509 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":182,"skipped":2997,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:14:05.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-dd1326f6-7302-4a21-8c77-d833429f8597
STEP: Creating a pod to test consume secrets
Aug 21 01:14:05.297: INFO: Waiting up to 5m0s for pod "pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf" in namespace "secrets-9844" to be "success or failure"
Aug 21 01:14:05.345: INFO: Pod "pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf": Phase="Pending", Reason="", readiness=false. Elapsed: 47.64897ms
Aug 21 01:14:07.350: INFO: Pod "pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05325222s
Aug 21 01:14:09.404: INFO: Pod "pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107187893s
Aug 21 01:14:11.411: INFO: Pod "pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114014771s
STEP: Saw pod success
Aug 21 01:14:11.411: INFO: Pod "pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf" satisfied condition "success or failure"
Aug 21 01:14:11.416: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf container secret-volume-test: 
STEP: delete the pod
Aug 21 01:14:11.455: INFO: Waiting for pod pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf to disappear
Aug 21 01:14:11.530: INFO: Pod pod-secrets-9690364f-418f-428a-b38b-759b0c9a8edf no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:14:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9844" for this suite.

• [SLOW TEST:6.362 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3006,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:14:11.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0821 01:14:42.257307       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 01:14:42.257: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:14:42.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2183" for this suite.

• [SLOW TEST:30.701 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":184,"skipped":3010,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:14:42.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Aug 21 01:14:42.323: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:14:43.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3171" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":185,"skipped":3071,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:14:43.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4114, will wait for the garbage collector to delete the pods
Aug 21 01:14:49.583: INFO: Deleting Job.batch foo took: 8.351398ms
Aug 21 01:14:49.683: INFO: Terminating Job.batch foo pods took: 100.563622ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:15:31.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4114" for this suite.

• [SLOW TEST:48.493 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":186,"skipped":3091,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:15:31.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e9811ad4-c3c0-428c-b45d-be6f0d33e822
STEP: Creating a pod to test consume secrets
Aug 21 01:15:32.269: INFO: Waiting up to 5m0s for pod "pod-secrets-cce64596-81c5-44ab-891c-777a036260d4" in namespace "secrets-2794" to be "success or failure"
Aug 21 01:15:32.395: INFO: Pod "pod-secrets-cce64596-81c5-44ab-891c-777a036260d4": Phase="Pending", Reason="", readiness=false. Elapsed: 126.25387ms
Aug 21 01:15:35.253: INFO: Pod "pod-secrets-cce64596-81c5-44ab-891c-777a036260d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984250778s
Aug 21 01:15:37.285: INFO: Pod "pod-secrets-cce64596-81c5-44ab-891c-777a036260d4": Phase="Running", Reason="", readiness=true. Elapsed: 5.015674925s
Aug 21 01:15:39.290: INFO: Pod "pod-secrets-cce64596-81c5-44ab-891c-777a036260d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.020575346s
STEP: Saw pod success
Aug 21 01:15:39.290: INFO: Pod "pod-secrets-cce64596-81c5-44ab-891c-777a036260d4" satisfied condition "success or failure"
Aug 21 01:15:39.294: INFO: Trying to get logs from node jerma-worker pod pod-secrets-cce64596-81c5-44ab-891c-777a036260d4 container secret-env-test: 
STEP: delete the pod
Aug 21 01:15:39.321: INFO: Waiting for pod pod-secrets-cce64596-81c5-44ab-891c-777a036260d4 to disappear
Aug 21 01:15:39.331: INFO: Pod pod-secrets-cce64596-81c5-44ab-891c-777a036260d4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:15:39.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2794" for this suite.

• [SLOW TEST:7.451 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3092,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:15:39.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:15:55.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7788" for this suite.

• [SLOW TEST:16.314 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":188,"skipped":3094,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:15:55.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3465
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3465
I0821 01:15:56.275401       7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3465, replica count: 2
I0821 01:15:59.326581       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 01:16:02.327054       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 01:16:02.327: INFO: Creating new exec pod
Aug 21 01:16:07.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3465 execpodmttcn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 01:16:08.895: INFO: stderr: "I0821 01:16:08.781973    2975 log.go:172] (0x4000aea0b0) (0x40006f4140) Create stream\nI0821 01:16:08.785828    2975 log.go:172] (0x4000aea0b0) (0x40006f4140) Stream added, broadcasting: 1\nI0821 01:16:08.800473    2975 log.go:172] (0x4000aea0b0) Reply frame received for 1\nI0821 01:16:08.801846    2975 log.go:172] (0x4000aea0b0) (0x4000754000) Create stream\nI0821 01:16:08.801964    2975 log.go:172] (0x4000aea0b0) (0x4000754000) Stream added, broadcasting: 3\nI0821 01:16:08.803789    2975 log.go:172] (0x4000aea0b0) Reply frame received for 3\nI0821 01:16:08.803982    2975 log.go:172] (0x4000aea0b0) (0x40006f41e0) Create stream\nI0821 01:16:08.804025    2975 log.go:172] (0x4000aea0b0) (0x40006f41e0) Stream added, broadcasting: 5\nI0821 01:16:08.805526    2975 log.go:172] (0x4000aea0b0) Reply frame received for 5\nI0821 01:16:08.878675    2975 log.go:172] (0x4000aea0b0) Data frame received for 5\nI0821 01:16:08.878866    2975 log.go:172] (0x4000aea0b0) Data frame received for 3\nI0821 01:16:08.879118    2975 log.go:172] (0x4000754000) (3) Data frame handling\nI0821 01:16:08.879282    2975 log.go:172] (0x40006f41e0) (5) Data frame handling\nI0821 01:16:08.879824    2975 log.go:172] (0x40006f41e0) (5) Data frame sent\nI0821 01:16:08.880107    2975 log.go:172] (0x4000aea0b0) Data frame received for 1\nI0821 01:16:08.880192    2975 log.go:172] (0x40006f4140) (1) Data frame handling\nI0821 01:16:08.880285    2975 log.go:172] (0x40006f4140) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 01:16:08.880683    2975 log.go:172] (0x4000aea0b0) Data frame received for 5\nI0821 01:16:08.880836    2975 log.go:172] (0x40006f41e0) (5) Data frame handling\nI0821 01:16:08.882317    2975 log.go:172] (0x4000aea0b0) (0x40006f4140) Stream removed, broadcasting: 1\nI0821 01:16:08.884045    2975 log.go:172] (0x4000aea0b0) Go away received\nI0821 01:16:08.886560    2975 log.go:172] (0x4000aea0b0) (0x40006f4140) Stream removed, broadcasting: 1\nI0821 01:16:08.886897    2975 log.go:172] (0x4000aea0b0) (0x4000754000) Stream removed, broadcasting: 3\nI0821 01:16:08.887115    2975 log.go:172] (0x4000aea0b0) (0x40006f41e0) Stream removed, broadcasting: 5\n"
Aug 21 01:16:08.896: INFO: stdout: ""
Aug 21 01:16:08.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3465 execpodmttcn -- /bin/sh -x -c nc -zv -t -w 2 10.96.186.100 80'
Aug 21 01:16:10.302: INFO: stderr: "I0821 01:16:10.206246    2998 log.go:172] (0x4000aa6a50) (0x40007eda40) Create stream\nI0821 01:16:10.209820    2998 log.go:172] (0x4000aa6a50) (0x40007eda40) Stream added, broadcasting: 1\nI0821 01:16:10.223093    2998 log.go:172] (0x4000aa6a50) Reply frame received for 1\nI0821 01:16:10.223963    2998 log.go:172] (0x4000aa6a50) (0x40007edc20) Create stream\nI0821 01:16:10.224044    2998 log.go:172] (0x4000aa6a50) (0x40007edc20) Stream added, broadcasting: 3\nI0821 01:16:10.225738    2998 log.go:172] (0x4000aa6a50) Reply frame received for 3\nI0821 01:16:10.226101    2998 log.go:172] (0x4000aa6a50) (0x4000ae0000) Create stream\nI0821 01:16:10.226193    2998 log.go:172] (0x4000aa6a50) (0x4000ae0000) Stream added, broadcasting: 5\nI0821 01:16:10.227581    2998 log.go:172] (0x4000aa6a50) Reply frame received for 5\nI0821 01:16:10.284576    2998 log.go:172] (0x4000aa6a50) Data frame received for 3\nI0821 01:16:10.284923    2998 log.go:172] (0x4000aa6a50) Data frame received for 5\nI0821 01:16:10.285096    2998 log.go:172] (0x4000ae0000) (5) Data frame handling\nI0821 01:16:10.285413    2998 log.go:172] (0x40007edc20) (3) Data frame handling\nI0821 01:16:10.285617    2998 log.go:172] (0x4000aa6a50) Data frame received for 1\nI0821 01:16:10.285686    2998 log.go:172] (0x40007eda40) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.186.100 80\nConnection to 10.96.186.100 80 port [tcp/http] succeeded!\nI0821 01:16:10.286905    2998 log.go:172] (0x40007eda40) (1) Data frame sent\nI0821 01:16:10.287464    2998 log.go:172] (0x4000ae0000) (5) Data frame sent\nI0821 01:16:10.287541    2998 log.go:172] (0x4000aa6a50) Data frame received for 5\nI0821 01:16:10.287597    2998 log.go:172] (0x4000ae0000) (5) Data frame handling\nI0821 01:16:10.288626    2998 log.go:172] (0x4000aa6a50) (0x40007eda40) Stream removed, broadcasting: 1\nI0821 01:16:10.290710    2998 log.go:172] (0x4000aa6a50) Go away received\nI0821 01:16:10.292309    2998 log.go:172] (0x4000aa6a50) (0x40007eda40) Stream removed, broadcasting: 1\nI0821 01:16:10.292551    2998 log.go:172] (0x4000aa6a50) (0x40007edc20) Stream removed, broadcasting: 3\nI0821 01:16:10.292853    2998 log.go:172] (0x4000aa6a50) (0x4000ae0000) Stream removed, broadcasting: 5\n"
Aug 21 01:16:10.303: INFO: stdout: ""
Aug 21 01:16:10.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3465 execpodmttcn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31409'
Aug 21 01:16:11.960: INFO: stderr: "I0821 01:16:11.876041    3020 log.go:172] (0x4000ac4a50) (0x4000821ae0) Create stream\nI0821 01:16:11.881513    3020 log.go:172] (0x4000ac4a50) (0x4000821ae0) Stream added, broadcasting: 1\nI0821 01:16:11.893907    3020 log.go:172] (0x4000ac4a50) Reply frame received for 1\nI0821 01:16:11.894465    3020 log.go:172] (0x4000ac4a50) (0x4000a46000) Create stream\nI0821 01:16:11.894533    3020 log.go:172] (0x4000ac4a50) (0x4000a46000) Stream added, broadcasting: 3\nI0821 01:16:11.895884    3020 log.go:172] (0x4000ac4a50) Reply frame received for 3\nI0821 01:16:11.896183    3020 log.go:172] (0x4000ac4a50) (0x4000684000) Create stream\nI0821 01:16:11.896251    3020 log.go:172] (0x4000ac4a50) (0x4000684000) Stream added, broadcasting: 5\nI0821 01:16:11.897608    3020 log.go:172] (0x4000ac4a50) Reply frame received for 5\nI0821 01:16:11.944139    3020 log.go:172] (0x4000ac4a50) Data frame received for 5\nI0821 01:16:11.944643    3020 log.go:172] (0x4000ac4a50) Data frame received for 3\nI0821 01:16:11.944889    3020 log.go:172] (0x4000a46000) (3) Data frame handling\nI0821 01:16:11.945003    3020 log.go:172] (0x4000684000) (5) Data frame handling\nI0821 01:16:11.945250    3020 log.go:172] (0x4000ac4a50) Data frame received for 1\nI0821 01:16:11.945339    3020 log.go:172] (0x4000821ae0) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 31409\nConnection to 172.18.0.6 31409 port [tcp/31409] succeeded!\nI0821 01:16:11.947752    3020 log.go:172] (0x4000684000) (5) Data frame sent\nI0821 01:16:11.947922    3020 log.go:172] (0x4000821ae0) (1) Data frame sent\nI0821 01:16:11.948029    3020 log.go:172] (0x4000ac4a50) Data frame received for 5\nI0821 01:16:11.948096    3020 log.go:172] (0x4000684000) (5) Data frame handling\nI0821 01:16:11.949437    3020 log.go:172] (0x4000ac4a50) (0x4000821ae0) Stream removed, broadcasting: 1\nI0821 01:16:11.951367    3020 log.go:172] (0x4000ac4a50) Go away received\nI0821 01:16:11.953230    3020 log.go:172] (0x4000ac4a50) (0x4000821ae0) Stream removed, broadcasting: 1\nI0821 01:16:11.953571    3020 log.go:172] (0x4000ac4a50) (0x4000a46000) Stream removed, broadcasting: 3\nI0821 01:16:11.953784    3020 log.go:172] (0x4000ac4a50) (0x4000684000) Stream removed, broadcasting: 5\n"
Aug 21 01:16:11.961: INFO: stdout: ""
Aug 21 01:16:11.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3465 execpodmttcn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31409'
Aug 21 01:16:13.368: INFO: stderr: "I0821 01:16:13.250468    3043 log.go:172] (0x4000a8c000) (0x400067a780) Create stream\nI0821 01:16:13.253264    3043 log.go:172] (0x4000a8c000) (0x400067a780) Stream added, broadcasting: 1\nI0821 01:16:13.267698    3043 log.go:172] (0x4000a8c000) Reply frame received for 1\nI0821 01:16:13.268662    3043 log.go:172] (0x4000a8c000) (0x40006afb80) Create stream\nI0821 01:16:13.268831    3043 log.go:172] (0x4000a8c000) (0x40006afb80) Stream added, broadcasting: 3\nI0821 01:16:13.270849    3043 log.go:172] (0x4000a8c000) Reply frame received for 3\nI0821 01:16:13.271322    3043 log.go:172] (0x4000a8c000) (0x4000650000) Create stream\nI0821 01:16:13.271458    3043 log.go:172] (0x4000a8c000) (0x4000650000) Stream added, broadcasting: 5\nI0821 01:16:13.272911    3043 log.go:172] (0x4000a8c000) Reply frame received for 5\nI0821 01:16:13.351461    3043 log.go:172] (0x4000a8c000) Data frame received for 5\nI0821 01:16:13.351727    3043 log.go:172] (0x4000a8c000) Data frame received for 1\nI0821 01:16:13.351926    3043 log.go:172] (0x4000a8c000) Data frame received for 3\nI0821 01:16:13.352010    3043 log.go:172] (0x40006afb80) (3) Data frame handling\nI0821 01:16:13.352165    3043 log.go:172] (0x400067a780) (1) Data frame handling\nI0821 01:16:13.352617    3043 log.go:172] (0x4000650000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 31409\nConnection to 172.18.0.3 31409 port [tcp/31409] succeeded!\nI0821 01:16:13.354580    3043 log.go:172] (0x4000650000) (5) Data frame sent\nI0821 01:16:13.354733    3043 log.go:172] (0x400067a780) (1) Data frame sent\nI0821 01:16:13.355236    3043 log.go:172] (0x4000a8c000) Data frame received for 5\nI0821 01:16:13.355338    3043 log.go:172] (0x4000650000) (5) Data frame handling\nI0821 01:16:13.356016    3043 log.go:172] (0x4000a8c000) (0x400067a780) Stream removed, broadcasting: 1\nI0821 01:16:13.357519    3043 log.go:172] (0x4000a8c000) Go away received\nI0821 01:16:13.360510    3043 log.go:172] (0x4000a8c000) (0x400067a780) Stream removed, broadcasting: 1\nI0821 01:16:13.360992    3043 log.go:172] (0x4000a8c000) (0x40006afb80) Stream removed, broadcasting: 3\nI0821 01:16:13.361188    3043 log.go:172] (0x4000a8c000) (0x4000650000) Stream removed, broadcasting: 5\n"
Aug 21 01:16:13.368: INFO: stdout: ""
Aug 21 01:16:13.368: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:16:13.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3465" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.761 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":189,"skipped":3104,"failed":0}
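Editor's note: the type change exercised above can be sketched as a manifest edit. This is an illustrative sketch, not the test's exact spec; the `externalName` target and the selector label are assumptions, and the NodePort (31409 in the log) is allocated by the API server rather than set by hand.

```yaml
# Before: an ExternalName Service is only a DNS alias (no selector needed).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-3465
spec:
  type: ExternalName
  externalName: example.com     # illustrative target; the e2e test uses its own backend
---
# After: switching type to NodePort (and dropping externalName) makes kube-proxy
# open a port on every node and forward to the selected pods.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-3465
spec:
  type: NodePort
  selector:
    name: externalname-service  # assumed label, matching the replication controller's pods
  ports:
  - port: 80
    targetPort: 80
```

The log then verifies connectivity three ways: by service DNS name, by ClusterIP (10.96.186.100), and by each node IP on the allocated NodePort, using `nc -zv -t -w 2 <addr> <port>` from an exec pod.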
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:16:13.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 21 01:16:20.196: INFO: Successfully updated pod "labelsupdatea1f1f40a-ab60-45d9-b405-a92e9ddf85e5"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:16:22.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2607" for this suite.

• [SLOW TEST:8.763 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3121,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:16:22.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:16:22.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3281" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":191,"skipped":3131,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:16:22.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 01:16:23.036: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:23.282: INFO: Number of nodes with available pods: 0
Aug 21 01:16:23.283: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:24.293: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:24.299: INFO: Number of nodes with available pods: 0
Aug 21 01:16:24.299: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:25.886: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:26.153: INFO: Number of nodes with available pods: 0
Aug 21 01:16:26.153: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:26.669: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:26.858: INFO: Number of nodes with available pods: 0
Aug 21 01:16:26.858: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:27.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:27.491: INFO: Number of nodes with available pods: 0
Aug 21 01:16:27.491: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:28.375: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:28.380: INFO: Number of nodes with available pods: 0
Aug 21 01:16:28.380: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:29.322: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:29.388: INFO: Number of nodes with available pods: 1
Aug 21 01:16:29.388: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:30.295: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:30.301: INFO: Number of nodes with available pods: 2
Aug 21 01:16:30.301: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 21 01:16:30.347: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:30.360: INFO: Number of nodes with available pods: 1
Aug 21 01:16:30.360: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:31.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:31.413: INFO: Number of nodes with available pods: 1
Aug 21 01:16:31.413: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:32.372: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:32.378: INFO: Number of nodes with available pods: 1
Aug 21 01:16:32.378: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:33.370: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:33.428: INFO: Number of nodes with available pods: 1
Aug 21 01:16:33.429: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:16:34.370: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:16:34.376: INFO: Number of nodes with available pods: 2
Aug 21 01:16:34.376: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3206, will wait for the garbage collector to delete the pods
Aug 21 01:16:34.465: INFO: Deleting DaemonSet.extensions daemon-set took: 25.303763ms
Aug 21 01:16:34.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.777227ms
Aug 21 01:16:41.707: INFO: Number of nodes with available pods: 0
Aug 21 01:16:41.707: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 01:16:41.713: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3206/daemonsets","resourceVersion":"1992901"},"items":null}

Aug 21 01:16:41.720: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3206/pods","resourceVersion":"1992902"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:16:41.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3206" for this suite.

• [SLOW TEST:18.897 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":192,"skipped":3143,"failed":0}
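Editor's note: the repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above come from the `node-role.kubernetes.io/master:NoSchedule` taint on the control-plane node, so the DaemonSet only lands on the two workers. A minimal sketch of a DaemonSet that would also schedule onto that tainted node (labels and image are assumptions, not the e2e test's spec):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set           # assumed label; the e2e test uses its own
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # This toleration matches the taint the log reports, so the control-plane
      # node would no longer be skipped.
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative image
```

Without the toleration, the scheduler behaves exactly as logged: the tainted node is skipped and only worker nodes count toward "Number of running nodes".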
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:16:41.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-56440874-d54d-4893-9dcc-d035a8f32969
STEP: Creating a pod to test consume secrets
Aug 21 01:16:41.883: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a" in namespace "projected-3469" to be "success or failure"
Aug 21 01:16:41.888: INFO: Pod "pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302251ms
Aug 21 01:16:43.894: INFO: Pod "pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010736938s
Aug 21 01:16:45.973: INFO: Pod "pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089642953s
STEP: Saw pod success
Aug 21 01:16:45.974: INFO: Pod "pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a" satisfied condition "success or failure"
Aug 21 01:16:45.991: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 01:16:46.100: INFO: Waiting for pod pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a to disappear
Aug 21 01:16:46.128: INFO: Pod pod-projected-secrets-e65e2685-caa9-4207-bd8a-55aa8075fe3a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:16:46.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3469" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3144,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:16:46.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:16:54.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8010" for this suite.

• [SLOW TEST:8.169 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3144,"failed":0}
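Editor's note: the scenario above (a busybox command that always fails, then checking for a terminated reason) can be sketched as a pod manifest. The pod name, image tag, and command are illustrative assumptions, not the test's exact spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29   # assumed tag
    command: ["/bin/false"]     # always exits non-zero
# Once the container exits, the kubelet records a terminated state, e.g.
# status.containerStatuses[0].state.terminated.reason set to "Error"
# with a non-zero exitCode, which is what the test asserts on.
```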
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:16:54.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 01:16:54.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9661'
Aug 21 01:16:56.157: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 21 01:16:56.157: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 21 01:16:56.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9661'
Aug 21 01:16:57.564: INFO: stderr: ""
Aug 21 01:16:57.564: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:16:57.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9661" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":195,"skipped":3153,"failed":0}
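Editor's note: the deprecation warning in the log ("kubectl run --generator=deployment/apps.v1 is DEPRECATED") points at `kubectl create` as the replacement. A sketch of the Deployment the generator produced, which could equally be applied directly; the `run:` label is an assumption based on how that generator labeled its pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
  namespace: kubectl-9661
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment   # assumed label
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```

The non-deprecated CLI equivalent would be `kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine` (note: `create deployment` applies `app:` labels rather than `run:`).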
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:16:57.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 21 01:17:10.847: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 01:17:10.864: INFO: Pod pod-with-poststart-http-hook still exists
Aug 21 01:17:12.865: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 01:17:12.912: INFO: Pod pod-with-poststart-http-hook still exists
Aug 21 01:17:14.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 01:17:14.886: INFO: Pod pod-with-poststart-http-hook still exists
Aug 21 01:17:16.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 01:17:16.871: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:17:16.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2671" for this suite.

• [SLOW TEST:19.306 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3178,"failed":0}
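[Editor's note: the pod under test carries a `postStart` HTTP lifecycle hook pointed at the handler pod created in BeforeEach. A hypothetical minimal manifest matching that shape — the image, path, port, and `host` IP below are illustrative placeholders, not values taken from this run:]

```yaml
# Sketch of a pod with a postStart httpGet hook; the kubelet sends this GET
# to host:port right after the container starts, and the test asserts the
# handler pod received it.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # placeholder image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # hypothetical handler path
          port: 8080
          host: 10.244.2.1             # handler pod IP in a real run
```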
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:17:16.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:17:17.029: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/: 
alternatives.log
containers/

[identical two-entry listing repeated for each remaining proxied request; log truncated here — the end of this proxy test and the header of the next test ([sig-cli] Kubectl client Guestbook application) are missing]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 21 01:17:17.317: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 21 01:17:17.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5214'
Aug 21 01:17:19.120: INFO: stderr: ""
Aug 21 01:17:19.120: INFO: stdout: "service/agnhost-slave created\n"
Aug 21 01:17:19.121: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 21 01:17:19.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5214'
Aug 21 01:17:20.726: INFO: stderr: ""
Aug 21 01:17:20.727: INFO: stdout: "service/agnhost-master created\n"
Aug 21 01:17:20.728: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 21 01:17:20.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5214'
Aug 21 01:17:22.818: INFO: stderr: ""
Aug 21 01:17:22.818: INFO: stdout: "service/frontend created\n"
Aug 21 01:17:22.823: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 21 01:17:22.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5214'
Aug 21 01:17:24.487: INFO: stderr: ""
Aug 21 01:17:24.487: INFO: stdout: "deployment.apps/frontend created\n"
Aug 21 01:17:24.488: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 21 01:17:24.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5214'
Aug 21 01:17:26.098: INFO: stderr: ""
Aug 21 01:17:26.098: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 21 01:17:26.099: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 21 01:17:26.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5214'
Aug 21 01:17:27.854: INFO: stderr: ""
Aug 21 01:17:27.854: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 21 01:17:27.854: INFO: Waiting for all frontend pods to be Running.
Aug 21 01:17:32.907: INFO: Waiting for frontend to serve content.
Aug 21 01:17:32.921: INFO: Trying to add a new entry to the guestbook.
Aug 21 01:17:32.930: INFO: Verifying that added entry can be retrieved.
Aug 21 01:17:32.938: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Aug 21 01:17:37.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5214'
Aug 21 01:17:39.310: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 01:17:39.310: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 01:17:39.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5214'
Aug 21 01:17:40.656: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 01:17:40.657: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 01:17:40.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5214'
Aug 21 01:17:41.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 01:17:41.911: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 01:17:41.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5214'
Aug 21 01:17:43.122: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 01:17:43.122: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 01:17:43.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5214'
Aug 21 01:17:44.374: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 01:17:44.375: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 01:17:44.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5214'
Aug 21 01:17:45.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 01:17:45.945: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:17:45.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5214" for this suite.

• [SLOW TEST:28.798 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":198,"skipped":3276,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:17:45.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-a0bc065e-cbb0-4d1c-9c3f-3644b2c438a9
STEP: Creating a pod to test consume configMaps
Aug 21 01:17:46.209: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8" in namespace "configmap-5786" to be "success or failure"
Aug 21 01:17:46.362: INFO: Pod "pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 153.197469ms
Aug 21 01:17:48.368: INFO: Pod "pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159337585s
Aug 21 01:17:50.542: INFO: Pod "pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8": Phase="Running", Reason="", readiness=true. Elapsed: 4.333268037s
Aug 21 01:17:52.619: INFO: Pod "pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.410264713s
STEP: Saw pod success
Aug 21 01:17:52.619: INFO: Pod "pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8" satisfied condition "success or failure"
Aug 21 01:17:52.842: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8 container configmap-volume-test: 
STEP: delete the pod
Aug 21 01:17:53.598: INFO: Waiting for pod pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8 to disappear
Aug 21 01:17:53.879: INFO: Pod pod-configmaps-5a05f237-6d10-4e7d-a7e4-385849072ff8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:17:53.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5786" for this suite.

• [SLOW TEST:8.192 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3286,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:17:54.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 01:17:55.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4072'
Aug 21 01:17:56.367: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 21 01:17:56.367: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 21 01:17:56.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4072'
Aug 21 01:17:57.753: INFO: stderr: ""
Aug 21 01:17:57.753: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:17:57.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4072" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":200,"skipped":3300,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:17:57.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 01:18:05.932: INFO: DNS probes using dns-6952/dns-test-0e3da8b8-2a9b-470b-94e6-628e590293bd succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:18:06.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6952" for this suite.

• [SLOW TEST:8.811 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":201,"skipped":3309,"failed":0}
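[Editor's note: the dense probe one-liners above (the doubled `$$` is just escaping in the test's template) build a dashed pod A-record name from the pod's IP. The transformation can be sketched on its own; the IP below is a stand-in for `hostname -i` inside the probe pod, and `dns-6952` is this run's generated namespace:]

```shell
# Stand-in for $(hostname -i) inside the probe pod; any IPv4 behaves the same.
pod_ip="10.244.2.77"
# Same awk as the probe: dot-separated octets become a dash-separated label
# under <namespace>.pod.cluster.local.
pod_a_record=$(printf '%s' "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6952.pod.cluster.local"}')
printf '%s\n' "$pod_a_record"   # 10-244-2-77.dns-6952.pod.cluster.local
```

[The probe then runs `dig +noall +answer +search` against that name over both UDP and TCP and writes an OK marker file per lookup, which the test collects.]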
S
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:18:06.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 01:18:07.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9260'
Aug 21 01:18:08.544: INFO: stderr: ""
Aug 21 01:18:08.545: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 21 01:18:13.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9260 -o json'
Aug 21 01:18:14.813: INFO: stderr: ""
Aug 21 01:18:14.813: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-21T01:18:08Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-9260\",\n        \"resourceVersion\": \"1993601\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9260/pods/e2e-test-httpd-pod\",\n        \"uid\": \"339b86b3-5654-4228-8e3d-4bbe390fdbac\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-9clwp\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-9clwp\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-9clwp\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T01:18:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T01:18:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T01:18:11Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T01:18:08Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://c258805a3bdc0e630180ea7c8febac8507b24053a4dfb6b06a734dcb03533577\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": 
{\n                        \"startedAt\": \"2020-08-21T01:18:10Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.77\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.77\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-21T01:18:08Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 21 01:18:14.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9260'
Aug 21 01:18:16.424: INFO: stderr: ""
Aug 21 01:18:16.424: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 21 01:18:16.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9260'
Aug 21 01:18:19.936: INFO: stderr: ""
Aug 21 01:18:19.936: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:18:19.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9260" for this suite.

• [SLOW TEST:13.400 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":202,"skipped":3310,"failed":0}
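[Editor's note: the "replace the image" step above fetches the pod manifest, swaps the image, and pipes the result back through `kubectl replace -f -`. A minimal sketch of that edit assuming a plain `sed` substitution — the kubectl calls need a live cluster, so only the text edit is shown runnable:]

```shell
# One-field stand-in for the pod JSON fetched with `kubectl get ... -o json`.
manifest='{"image": "docker.io/library/httpd:2.4.38-alpine"}'
# Swap the image tag, which is exactly what the test verifies afterwards.
updated=$(printf '%s' "$manifest" | sed 's|httpd:2.4.38-alpine|busybox:1.29|')
printf '%s\n' "$updated"   # {"image": "docker.io/library/busybox:1.29"}
# Hypothetical full round-trip against a cluster:
# kubectl get pod e2e-test-httpd-pod -o json \
#   | sed 's|httpd:2.4.38-alpine|busybox:1.29|' \
#   | kubectl replace -f -
```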
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:18:19.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 01:18:22.577: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 01:18:24.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:18:26.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569502, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 01:18:29.648: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:18:29.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9244-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:18:30.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3844" for this suite.
STEP: Destroying namespace "webhook-3844-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.012 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":203,"skipped":3334,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:18:30.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3203
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3203
STEP: creating replication controller externalsvc in namespace services-3203
I0821 01:18:31.224519       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3203, replica count: 2
I0821 01:18:34.275812       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 01:18:37.276456       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 21 01:18:37.312: INFO: Creating new exec pod
Aug 21 01:18:41.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3203 execpod89tbr -- /bin/sh -x -c nslookup clusterip-service'
Aug 21 01:18:42.867: INFO: stderr: "I0821 01:18:42.740149    3530 log.go:172] (0x40004eea50) (0x40007421e0) Create stream\nI0821 01:18:42.746314    3530 log.go:172] (0x40004eea50) (0x40007421e0) Stream added, broadcasting: 1\nI0821 01:18:42.758395    3530 log.go:172] (0x40004eea50) Reply frame received for 1\nI0821 01:18:42.759063    3530 log.go:172] (0x40004eea50) (0x40007e2000) Create stream\nI0821 01:18:42.759128    3530 log.go:172] (0x40004eea50) (0x40007e2000) Stream added, broadcasting: 3\nI0821 01:18:42.760538    3530 log.go:172] (0x40004eea50) Reply frame received for 3\nI0821 01:18:42.760927    3530 log.go:172] (0x40004eea50) (0x40007e6000) Create stream\nI0821 01:18:42.761011    3530 log.go:172] (0x40004eea50) (0x40007e6000) Stream added, broadcasting: 5\nI0821 01:18:42.762236    3530 log.go:172] (0x40004eea50) Reply frame received for 5\nI0821 01:18:42.836367    3530 log.go:172] (0x40004eea50) Data frame received for 5\nI0821 01:18:42.836564    3530 log.go:172] (0x40007e6000) (5) Data frame handling\nI0821 01:18:42.836999    3530 log.go:172] (0x40007e6000) (5) Data frame sent\n+ nslookup clusterip-service\nI0821 01:18:42.844456    3530 log.go:172] (0x40004eea50) Data frame received for 3\nI0821 01:18:42.844548    3530 log.go:172] (0x40007e2000) (3) Data frame handling\nI0821 01:18:42.844629    3530 log.go:172] (0x40007e2000) (3) Data frame sent\nI0821 01:18:42.845557    3530 log.go:172] (0x40004eea50) Data frame received for 3\nI0821 01:18:42.845658    3530 log.go:172] (0x40007e2000) (3) Data frame handling\nI0821 01:18:42.845782    3530 log.go:172] (0x40007e2000) (3) Data frame sent\nI0821 01:18:42.846609    3530 log.go:172] (0x40004eea50) Data frame received for 3\nI0821 01:18:42.846895    3530 log.go:172] (0x40007e2000) (3) Data frame handling\nI0821 01:18:42.847157    3530 log.go:172] (0x40004eea50) Data frame received for 5\nI0821 01:18:42.847351    3530 log.go:172] (0x40007e6000) (5) Data frame handling\nI0821 01:18:42.848990    3530 log.go:172] (0x40004eea50) Data frame received for 1\nI0821 01:18:42.849101    3530 log.go:172] (0x40007421e0) (1) Data frame handling\nI0821 01:18:42.849187    3530 log.go:172] (0x40007421e0) (1) Data frame sent\nI0821 01:18:42.850371    3530 log.go:172] (0x40004eea50) (0x40007421e0) Stream removed, broadcasting: 1\nI0821 01:18:42.854783    3530 log.go:172] (0x40004eea50) Go away received\nI0821 01:18:42.857533    3530 log.go:172] (0x40004eea50) (0x40007421e0) Stream removed, broadcasting: 1\nI0821 01:18:42.858089    3530 log.go:172] (0x40004eea50) (0x40007e2000) Stream removed, broadcasting: 3\nI0821 01:18:42.858320    3530 log.go:172] (0x40004eea50) (0x40007e6000) Stream removed, broadcasting: 5\n"
Aug 21 01:18:42.868: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3203.svc.cluster.local\tcanonical name = externalsvc.services-3203.svc.cluster.local.\nName:\texternalsvc.services-3203.svc.cluster.local\nAddress: 10.96.223.111\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3203, will wait for the garbage collector to delete the pods
Aug 21 01:18:42.934: INFO: Deleting ReplicationController externalsvc took: 8.70591ms
Aug 21 01:18:43.034: INFO: Terminating ReplicationController externalsvc pods took: 100.642241ms
Aug 21 01:18:51.811: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:18:51.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3203" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:20.901 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":204,"skipped":3344,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:18:51.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 21 01:18:51.962: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:19:00.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9817" for this suite.

• [SLOW TEST:8.131 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":205,"skipped":3356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:19:00.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-fa798c0d-2128-4e1a-940a-ea3d655631c4
STEP: Creating a pod to test consume configMaps
Aug 21 01:19:00.143: INFO: Waiting up to 5m0s for pod "pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10" in namespace "configmap-3428" to be "success or failure"
Aug 21 01:19:00.153: INFO: Pod "pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10": Phase="Pending", Reason="", readiness=false. Elapsed: 9.515121ms
Aug 21 01:19:02.160: INFO: Pod "pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016645182s
Aug 21 01:19:04.169: INFO: Pod "pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025314902s
STEP: Saw pod success
Aug 21 01:19:04.169: INFO: Pod "pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10" satisfied condition "success or failure"
Aug 21 01:19:04.175: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10 container configmap-volume-test: 
STEP: delete the pod
Aug 21 01:19:04.202: INFO: Waiting for pod pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10 to disappear
Aug 21 01:19:04.207: INFO: Pod pod-configmaps-71d809d1-d5bc-4351-9c06-72dfbc865e10 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:19:04.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3428" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3391,"failed":0}

------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:19:04.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:19:04.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:19:10.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3037" for this suite.

• [SLOW TEST:6.154 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:19:10.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-346
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-346
STEP: Deleting pre-stop pod
Aug 21 01:19:23.555: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:19:23.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-346" for this suite.

• [SLOW TEST:13.221 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":208,"skipped":3439,"failed":0}
SS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:19:23.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:19:39.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8597" for this suite.

• [SLOW TEST:16.181 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":209,"skipped":3441,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:19:39.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-sndj
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 01:19:40.006: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sndj" in namespace "subpath-3217" to be "success or failure"
Aug 21 01:19:40.040: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.722666ms
Aug 21 01:19:42.257: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251156677s
Aug 21 01:19:44.263: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 4.257014736s
Aug 21 01:19:46.275: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 6.26913331s
Aug 21 01:19:48.282: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 8.275934507s
Aug 21 01:19:50.289: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 10.282887528s
Aug 21 01:19:52.295: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 12.2892395s
Aug 21 01:19:54.322: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 14.315705643s
Aug 21 01:19:56.328: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 16.321719105s
Aug 21 01:19:58.335: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 18.328945707s
Aug 21 01:20:00.346: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 20.33999202s
Aug 21 01:20:02.353: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Running", Reason="", readiness=true. Elapsed: 22.347235549s
Aug 21 01:20:04.425: INFO: Pod "pod-subpath-test-configmap-sndj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.419433427s
STEP: Saw pod success
Aug 21 01:20:04.426: INFO: Pod "pod-subpath-test-configmap-sndj" satisfied condition "success or failure"
Aug 21 01:20:04.430: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-sndj container test-container-subpath-configmap-sndj: 
STEP: delete the pod
Aug 21 01:20:04.501: INFO: Waiting for pod pod-subpath-test-configmap-sndj to disappear
Aug 21 01:20:04.567: INFO: Pod pod-subpath-test-configmap-sndj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-sndj
Aug 21 01:20:04.568: INFO: Deleting pod "pod-subpath-test-configmap-sndj" in namespace "subpath-3217"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:20:04.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3217" for this suite.

• [SLOW TEST:24.792 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":210,"skipped":3505,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:20:04.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:20:04.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3" in namespace "projected-7005" to be "success or failure"
Aug 21 01:20:04.701: INFO: Pod "downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.237134ms
Aug 21 01:20:06.707: INFO: Pod "downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016886362s
Aug 21 01:20:08.715: INFO: Pod "downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024213914s
STEP: Saw pod success
Aug 21 01:20:08.715: INFO: Pod "downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3" satisfied condition "success or failure"
Aug 21 01:20:08.721: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3 container client-container: 
STEP: delete the pod
Aug 21 01:20:08.759: INFO: Waiting for pod downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3 to disappear
Aug 21 01:20:08.770: INFO: Pod downwardapi-volume-949db4fd-9dd2-40a2-81d4-220f1728f4e3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:20:08.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7005" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3512,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:20:08.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:20:08.890: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 21 01:20:13.896: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 01:20:13.897: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 21 01:20:15.907: INFO: Creating deployment "test-rollover-deployment"
Aug 21 01:20:15.934: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 21 01:20:17.952: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 21 01:20:17.965: INFO: Ensure that both replica sets have 1 created replica
Aug 21 01:20:17.977: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 21 01:20:17.994: INFO: Updating deployment test-rollover-deployment
Aug 21 01:20:17.994: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 21 01:20:20.006: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 21 01:20:20.018: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 21 01:20:20.031: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 01:20:20.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569618, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:20:22.048: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 01:20:22.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569621, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:20:24.048: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 01:20:24.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569621, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:20:26.047: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 01:20:26.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569621, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:20:28.049: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 01:20:28.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569621, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:20:30.048: INFO: all replica sets need to contain the pod-template-hash label
Aug 21 01:20:30.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569621, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569615, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:20:32.049: INFO: 
Aug 21 01:20:32.049: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 21 01:20:32.064: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-6222 /apis/apps/v1/namespaces/deployment-6222/deployments/test-rollover-deployment a6adb4b8-5f25-4203-b7a8-08eb38c4b0a0 1994567 2 2020-08-21 01:20:15 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4004a5df28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 01:20:15 +0000 UTC,LastTransitionTime:2020-08-21 01:20:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-21 01:20:31 +0000 UTC,LastTransitionTime:2020-08-21 01:20:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 21 01:20:32.074: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-6222 /apis/apps/v1/namespaces/deployment-6222/replicasets/test-rollover-deployment-574d6dfbff 145fe4db-2b23-4495-b8d6-3de4bb56eb4f 1994557 2 2020-08-21 01:20:17 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a6adb4b8-5f25-4203-b7a8-08eb38c4b0a0 0x4004acc387 0x4004acc388}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4004acc3f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 01:20:32.075: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 21 01:20:32.075: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-6222 /apis/apps/v1/namespaces/deployment-6222/replicasets/test-rollover-controller ecd333a6-7c97-4ff3-af23-b16984b981a9 1994566 2 2020-08-21 01:20:08 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a6adb4b8-5f25-4203-b7a8-08eb38c4b0a0 0x4004acc29f 0x4004acc2b0}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4004acc318  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 01:20:32.076: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-6222 /apis/apps/v1/namespaces/deployment-6222/replicasets/test-rollover-deployment-f6c94f66c 3c76c309-5603-41a8-9411-7576888110a8 1994509 2 2020-08-21 01:20:15 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a6adb4b8-5f25-4203-b7a8-08eb38c4b0a0 0x4004acc460 0x4004acc461}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4004acc4d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 01:20:32.083: INFO: Pod "test-rollover-deployment-574d6dfbff-rqnpl" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-rqnpl test-rollover-deployment-574d6dfbff- deployment-6222 /api/v1/namespaces/deployment-6222/pods/test-rollover-deployment-574d6dfbff-rqnpl 7b5c11f6-ff38-45ca-8e2e-99b7e0c574d4 1994525 0 2020-08-21 01:20:18 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 145fe4db-2b23-4495-b8d6-3de4bb56eb4f 0x4004a90cc7 0x4004a90cc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-27kgr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-27kgr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-27kgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:20:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:20:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:20:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:20:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.96,StartTime:2020-08-21 01:20:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 01:20:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://96c1913468f7565a61a8a3149ef31e4bed285474ec6ec41535424cc0b64c3b89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:20:32.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6222" for this suite.

• [SLOW TEST:23.293 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":212,"skipped":3537,"failed":0}
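Editor's note: the rollover behavior logged above can be approximated with a manifest like the following. This is a hedged reconstruction from the DeploymentSpec dumped in the log (name, image, strategy, and minReadySeconds values are taken from the log; the rest is illustrative), not the test's actual source.

```yaml
# Sketch of "test-rollover-deployment" as dumped above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10           # logged: MinReadySeconds:10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # logged: MaxUnavailable:0
      maxSurge: 1               # logged: MaxSurge:1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # logged image
```

With maxUnavailable: 0 and maxSurge: 1, the controller surges one new pod, waits minReadySeconds for it to stay ready, then scales the old ReplicaSets to zero, which is why the status lines above report UpdatedReplicas:1 with UnavailableReplicas:1 until the "Ensure that both old replica sets have no replicas" check passes at 01:20:32.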
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:20:32.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:20:32.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac" in namespace "projected-9527" to be "success or failure"
Aug 21 01:20:32.268: INFO: Pod "downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac": Phase="Pending", Reason="", readiness=false. Elapsed: 32.613146ms
Aug 21 01:20:34.274: INFO: Pod "downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038550721s
Aug 21 01:20:36.279: INFO: Pod "downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043693392s
STEP: Saw pod success
Aug 21 01:20:36.279: INFO: Pod "downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac" satisfied condition "success or failure"
Aug 21 01:20:36.282: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac container client-container: 
STEP: delete the pod
Aug 21 01:20:36.410: INFO: Waiting for pod downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac to disappear
Aug 21 01:20:36.640: INFO: Pod downwardapi-volume-d9599f69-204d-413d-914b-bc0baddb0cac no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:20:36.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9527" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3541,"failed":0}
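Editor's note: a projected downwardAPI volume of the kind this test (and the cpu-request variant above) exercises looks roughly like the following. This is a sketch of the API shape, not the test's actual pod; the pod name, image, request value, and file path are illustrative assumptions.

```yaml
# Illustrative pod exposing its own memory request via a projected
# downwardAPI volume; the test passes when the file content matches
# the declared request.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example        # hypothetical name
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed; not shown in the log
    resources:
      requests:
        memory: 32Mi                      # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```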
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:20:36.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-eae77548-1397-4c55-af76-db3e8077da2b
STEP: Creating a pod to test consume configMaps
Aug 21 01:20:37.007: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf" in namespace "projected-5877" to be "success or failure"
Aug 21 01:20:37.072: INFO: Pod "pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 64.532088ms
Aug 21 01:20:39.078: INFO: Pod "pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070359947s
Aug 21 01:20:41.083: INFO: Pod "pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf": Phase="Running", Reason="", readiness=true. Elapsed: 4.076262141s
Aug 21 01:20:43.090: INFO: Pod "pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083292801s
STEP: Saw pod success
Aug 21 01:20:43.091: INFO: Pod "pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf" satisfied condition "success or failure"
Aug 21 01:20:43.095: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 01:20:43.145: INFO: Waiting for pod pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf to disappear
Aug 21 01:20:43.162: INFO: Pod pod-projected-configmaps-4d27b279-ba9f-4a91-995d-52ee88ba9ccf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:20:43.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5877" for this suite.

• [SLOW TEST:6.527 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3542,"failed":0}
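Editor's note: the defaultMode behavior checked above corresponds to a projected configMap volume along these lines. Only the configMap name comes from the log; the pod name, image, command, mount path, and mode value are illustrative assumptions.

```yaml
# Illustrative pod consuming a projected configMap with defaultMode set;
# the test verifies the mounted files carry the requested mode bits
# (hence [LinuxOnly]).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400                    # illustrative mode
      sources:
      - configMap:
          name: projected-configmap-test-volume-eae77548-1397-4c55-af76-db3e8077da2b
```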
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:20:43.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 21 01:20:43.327: INFO: PodSpec: initContainers in spec.initContainers
Aug 21 01:21:31.363: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-21899950-63f2-413a-989e-7c5d46c87a09", GenerateName:"", Namespace:"init-container-8504", SelfLink:"/api/v1/namespaces/init-container-8504/pods/pod-init-21899950-63f2-413a-989e-7c5d46c87a09", UID:"102daa3c-4ebf-413c-a9a0-9a0b05aff697", ResourceVersion:"1994857", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733569643, loc:(*time.Location)(0x726af60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"326602436"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jcswp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x400276f940), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jcswp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jcswp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jcswp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4004bbb888), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002c09bc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4004bbb920)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4004bbb940)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4004bbb948), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4004bbb94c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569643, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569643, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569643, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569643, loc:(*time.Location)(0x726af60)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.88", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.88"}}, StartTime:(*v1.Time)(0x4002bca100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x400049fc70)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x400049fe30)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2e9d6dd9942ef1153d437878dc7c130cf294a1c52172026e2c2067134fd2a5d9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4002bca140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4002bca120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0x4004bbb9cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:21:31.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8504" for this suite.

• [SLOW TEST:48.264 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":215,"skipped":3545,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:21:31.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7122
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-7122
Aug 21 01:21:31.563: INFO: Found 0 stateful pods, waiting for 1
Aug 21 01:21:41.569: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 01:21:41.632: INFO: Deleting all statefulset in ns statefulset-7122
Aug 21 01:21:41.676: INFO: Scaling statefulset ss to 0
Aug 21 01:21:51.772: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 01:21:51.777: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:21:51.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7122" for this suite.

• [SLOW TEST:20.362 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":216,"skipped":3552,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:21:51.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 21 01:21:51.910: INFO: namespace kubectl-2006
Aug 21 01:21:51.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2006'
Aug 21 01:21:53.595: INFO: stderr: ""
Aug 21 01:21:53.595: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 21 01:21:54.611: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 01:21:54.612: INFO: Found 0 / 1
Aug 21 01:21:55.615: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 01:21:55.615: INFO: Found 0 / 1
Aug 21 01:21:56.604: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 01:21:56.604: INFO: Found 0 / 1
Aug 21 01:21:57.604: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 01:21:57.604: INFO: Found 1 / 1
Aug 21 01:21:57.605: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 21 01:21:57.611: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 01:21:57.611: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 21 01:21:57.611: INFO: wait on agnhost-master startup in kubectl-2006 
Aug 21 01:21:57.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-jn6xd agnhost-master --namespace=kubectl-2006'
Aug 21 01:21:58.916: INFO: stderr: ""
Aug 21 01:21:58.917: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 21 01:21:58.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2006'
Aug 21 01:22:00.274: INFO: stderr: ""
Aug 21 01:22:00.274: INFO: stdout: "service/rm2 exposed\n"
Aug 21 01:22:00.278: INFO: Service rm2 in namespace kubectl-2006 found.
STEP: exposing service
Aug 21 01:22:02.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2006'
Aug 21 01:22:03.602: INFO: stderr: ""
Aug 21 01:22:03.602: INFO: stdout: "service/rm3 exposed\n"
Aug 21 01:22:03.624: INFO: Service rm3 in namespace kubectl-2006 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:22:05.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2006" for this suite.

• [SLOW TEST:13.842 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":217,"skipped":3560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:22:05.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Aug 21 01:22:05.817: INFO: Waiting up to 5m0s for pod "var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4" in namespace "var-expansion-4523" to be "success or failure"
Aug 21 01:22:05.849: INFO: Pod "var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.50641ms
Aug 21 01:22:07.855: INFO: Pod "var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037604296s
Aug 21 01:22:09.862: INFO: Pod "var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044842751s
STEP: Saw pod success
Aug 21 01:22:09.863: INFO: Pod "var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4" satisfied condition "success or failure"
Aug 21 01:22:09.868: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4 container dapi-container: 
STEP: delete the pod
Aug 21 01:22:10.081: INFO: Waiting for pod var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4 to disappear
Aug 21 01:22:10.117: INFO: Pod var-expansion-615b9ea3-e86c-4473-8144-1b378b6bf8f4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:22:10.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4523" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3618,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:22:10.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:22:25.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6968" for this suite.
STEP: Destroying namespace "nsdeletetest-2807" for this suite.
Aug 21 01:22:25.550: INFO: Namespace nsdeletetest-2807 was already deleted
STEP: Destroying namespace "nsdeletetest-4377" for this suite.

• [SLOW TEST:15.421 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":219,"skipped":3626,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:22:25.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:22:25.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4258" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":220,"skipped":3633,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:22:25.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:22:25.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b" in namespace "projected-3888" to be "success or failure"
Aug 21 01:22:25.897: INFO: Pod "downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.529419ms
Aug 21 01:22:27.903: INFO: Pod "downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058190373s
Aug 21 01:22:29.911: INFO: Pod "downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b": Phase="Running", Reason="", readiness=true. Elapsed: 4.066011371s
Aug 21 01:22:31.918: INFO: Pod "downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072956881s
STEP: Saw pod success
Aug 21 01:22:31.918: INFO: Pod "downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b" satisfied condition "success or failure"
Aug 21 01:22:31.924: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b container client-container: 
STEP: delete the pod
Aug 21 01:22:32.004: INFO: Waiting for pod downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b to disappear
Aug 21 01:22:32.016: INFO: Pod downwardapi-volume-564f9882-835d-4b4a-bba0-474e4729a76b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:22:32.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3888" for this suite.

• [SLOW TEST:6.308 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3636,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:22:32.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 01:22:35.448: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 01:22:37.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569755, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569755, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569755, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569755, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 01:22:40.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:22:40.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7393" for this suite.
STEP: Destroying namespace "webhook-7393-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.926 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":222,"skipped":3636,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:22:40.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 21 01:22:41.010: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 01:23:00.221: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:24:17.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4875" for this suite.

• [SLOW TEST:96.458 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":223,"skipped":3733,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:24:17.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:24:17.509: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 21 01:24:18.555: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:24:18.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1639" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":224,"skipped":3758,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:24:18.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 21 01:24:18.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8393 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 21 01:24:27.890: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0821 01:24:27.748010    3651 log.go:172] (0x4000119080) (0x40008fa3c0) Create stream\nI0821 01:24:27.750486    3651 log.go:172] (0x4000119080) (0x40008fa3c0) Stream added, broadcasting: 1\nI0821 01:24:27.770477    3651 log.go:172] (0x4000119080) Reply frame received for 1\nI0821 01:24:27.771082    3651 log.go:172] (0x4000119080) (0x4000803c20) Create stream\nI0821 01:24:27.771173    3651 log.go:172] (0x4000119080) (0x4000803c20) Stream added, broadcasting: 3\nI0821 01:24:27.773214    3651 log.go:172] (0x4000119080) Reply frame received for 3\nI0821 01:24:27.773611    3651 log.go:172] (0x4000119080) (0x40008fa460) Create stream\nI0821 01:24:27.773689    3651 log.go:172] (0x4000119080) (0x40008fa460) Stream added, broadcasting: 5\nI0821 01:24:27.775639    3651 log.go:172] (0x4000119080) Reply frame received for 5\nI0821 01:24:27.775926    3651 log.go:172] (0x4000119080) (0x4000803cc0) Create stream\nI0821 01:24:27.775986    3651 log.go:172] (0x4000119080) (0x4000803cc0) Stream added, broadcasting: 7\nI0821 01:24:27.777351    3651 log.go:172] (0x4000119080) Reply frame received for 7\nI0821 01:24:27.779666    3651 log.go:172] (0x4000803c20) (3) Writing data frame\nI0821 01:24:27.781211    3651 log.go:172] (0x4000803c20) (3) Writing data frame\nI0821 01:24:27.781637    3651 log.go:172] (0x4000119080) Data frame received for 5\nI0821 01:24:27.781802    3651 log.go:172] (0x40008fa460) (5) Data frame handling\nI0821 01:24:27.782070    3651 log.go:172] (0x40008fa460) (5) Data frame sent\nI0821 01:24:27.782590    3651 log.go:172] (0x4000119080) Data frame received for 5\nI0821 01:24:27.782674    3651 log.go:172] (0x40008fa460) (5) Data frame handling\nI0821 01:24:27.782738    3651 log.go:172] (0x40008fa460) (5) Data frame 
sent\nI0821 01:24:27.806462    3651 log.go:172] (0x4000119080) Data frame received for 7\nI0821 01:24:27.806854    3651 log.go:172] (0x4000803cc0) (7) Data frame handling\nI0821 01:24:27.807102    3651 log.go:172] (0x4000119080) Data frame received for 5\nI0821 01:24:27.807207    3651 log.go:172] (0x40008fa460) (5) Data frame handling\nI0821 01:24:27.807372    3651 log.go:172] (0x4000119080) Data frame received for 1\nI0821 01:24:27.807494    3651 log.go:172] (0x40008fa3c0) (1) Data frame handling\nI0821 01:24:27.807600    3651 log.go:172] (0x40008fa3c0) (1) Data frame sent\nI0821 01:24:27.809744    3651 log.go:172] (0x4000119080) (0x40008fa3c0) Stream removed, broadcasting: 1\nI0821 01:24:27.810518    3651 log.go:172] (0x4000119080) (0x4000803c20) Stream removed, broadcasting: 3\nI0821 01:24:27.812387    3651 log.go:172] (0x4000119080) Go away received\nI0821 01:24:27.815612    3651 log.go:172] (0x4000119080) (0x40008fa3c0) Stream removed, broadcasting: 1\nI0821 01:24:27.816058    3651 log.go:172] (0x4000119080) (0x4000803c20) Stream removed, broadcasting: 3\nI0821 01:24:27.816183    3651 log.go:172] (0x4000119080) (0x40008fa460) Stream removed, broadcasting: 5\nI0821 01:24:27.816554    3651 log.go:172] (0x4000119080) (0x4000803cc0) Stream removed, broadcasting: 7\n"
Aug 21 01:24:27.891: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:24:29.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8393" for this suite.

• [SLOW TEST:11.219 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":225,"skipped":3769,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:24:29.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:24:52.051: INFO: Container started at 2020-08-21 01:24:32 +0000 UTC, pod became ready at 2020-08-21 01:24:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:24:52.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7084" for this suite.

• [SLOW TEST:22.141 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3807,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:24:52.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 21 01:24:52.161: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 21 01:24:54.593: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 21 01:24:57.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:24:59.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733569894, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 01:25:01.845: INFO: Waited 529.755246ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:25:02.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3593" for this suite.

• [SLOW TEST:10.315 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":227,"skipped":3817,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:25:02.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 21 01:25:02.826: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 01:25:02.908: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 01:25:02.912: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 21 01:25:02.937: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 21 01:25:02.937: INFO: 	Container app ready: true, restart count 0
Aug 21 01:25:02.937: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:25:02.937: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 01:25:02.937: INFO: test-webserver-7a72893b-cfd8-4eb7-a752-c324ae16ff63 from container-probe-7084 started at 2020-08-21 01:24:30 +0000 UTC (1 container statuses recorded)
Aug 21 01:25:02.937: INFO: 	Container test-webserver ready: false, restart count 0
Aug 21 01:25:02.937: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:25:02.937: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 01:25:02.937: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 21 01:25:02.965: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 21 01:25:02.965: INFO: 	Container app ready: true, restart count 0
Aug 21 01:25:02.965: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:25:02.966: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 01:25:02.966: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 01:25:02.966: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f7fa0383-3a75-4cfe-965f-0707ef7860a5 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-f7fa0383-3a75-4cfe-965f-0707ef7860a5 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f7fa0383-3a75-4cfe-965f-0707ef7860a5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:30:11.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4839" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:309.033 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":228,"skipped":3823,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:30:11.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-8dabafc1-b132-4b3b-9446-587a19b4b0d2
STEP: Creating a pod to test consume configMaps
Aug 21 01:30:11.492: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571" in namespace "projected-4674" to be "success or failure"
Aug 21 01:30:11.509: INFO: Pod "pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571": Phase="Pending", Reason="", readiness=false. Elapsed: 16.280828ms
Aug 21 01:30:13.517: INFO: Pod "pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024061442s
Aug 21 01:30:15.523: INFO: Pod "pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571": Phase="Running", Reason="", readiness=true. Elapsed: 4.030650463s
Aug 21 01:30:17.529: INFO: Pod "pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036224664s
STEP: Saw pod success
Aug 21 01:30:17.529: INFO: Pod "pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571" satisfied condition "success or failure"
Aug 21 01:30:17.533: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 01:30:17.583: INFO: Waiting for pod pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571 to disappear
Aug 21 01:30:17.672: INFO: Pod pod-projected-configmaps-aff10cbf-7570-4f15-add1-51d3a0bf1571 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:30:17.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4674" for this suite.

• [SLOW TEST:6.264 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3838,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:30:17.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1476
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 21 01:30:17.910: INFO: Found 0 stateful pods, waiting for 3
Aug 21 01:30:27.919: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:30:27.919: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:30:27.919: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 21 01:30:37.919: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:30:37.920: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:30:37.920: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:30:37.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 01:30:39.360: INFO: stderr: "I0821 01:30:39.247509    3692 log.go:172] (0x4000aa4000) (0x4000821ae0) Create stream\nI0821 01:30:39.250045    3692 log.go:172] (0x4000aa4000) (0x4000821ae0) Stream added, broadcasting: 1\nI0821 01:30:39.259165    3692 log.go:172] (0x4000aa4000) Reply frame received for 1\nI0821 01:30:39.259964    3692 log.go:172] (0x4000aa4000) (0x4000a4e000) Create stream\nI0821 01:30:39.260047    3692 log.go:172] (0x4000aa4000) (0x4000a4e000) Stream added, broadcasting: 3\nI0821 01:30:39.261665    3692 log.go:172] (0x4000aa4000) Reply frame received for 3\nI0821 01:30:39.262030    3692 log.go:172] (0x4000aa4000) (0x4000a4e0a0) Create stream\nI0821 01:30:39.262101    3692 log.go:172] (0x4000aa4000) (0x4000a4e0a0) Stream added, broadcasting: 5\nI0821 01:30:39.263511    3692 log.go:172] (0x4000aa4000) Reply frame received for 5\nI0821 01:30:39.315888    3692 log.go:172] (0x4000aa4000) Data frame received for 5\nI0821 01:30:39.316171    3692 log.go:172] (0x4000a4e0a0) (5) Data frame handling\nI0821 01:30:39.316978    3692 log.go:172] (0x4000a4e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 01:30:39.338559    3692 log.go:172] (0x4000aa4000) Data frame received for 3\nI0821 01:30:39.338780    3692 log.go:172] (0x4000a4e000) (3) Data frame handling\nI0821 01:30:39.338932    3692 log.go:172] (0x4000a4e000) (3) Data frame sent\nI0821 01:30:39.339957    3692 log.go:172] (0x4000aa4000) Data frame received for 3\nI0821 01:30:39.340171    3692 log.go:172] (0x4000aa4000) Data frame received for 5\nI0821 01:30:39.340440    3692 log.go:172] (0x4000a4e000) (3) Data frame handling\nI0821 01:30:39.340704    3692 log.go:172] (0x4000a4e0a0) (5) Data frame handling\nI0821 01:30:39.341214    3692 log.go:172] (0x4000aa4000) Data frame received for 1\nI0821 01:30:39.341290    3692 log.go:172] (0x4000821ae0) (1) Data frame handling\nI0821 01:30:39.341365    3692 log.go:172] (0x4000821ae0) (1) Data frame sent\nI0821 01:30:39.343859  
  3692 log.go:172] (0x4000aa4000) (0x4000821ae0) Stream removed, broadcasting: 1\nI0821 01:30:39.345953    3692 log.go:172] (0x4000aa4000) Go away received\nI0821 01:30:39.349877    3692 log.go:172] (0x4000aa4000) (0x4000821ae0) Stream removed, broadcasting: 1\nI0821 01:30:39.350183    3692 log.go:172] (0x4000aa4000) (0x4000a4e000) Stream removed, broadcasting: 3\nI0821 01:30:39.350386    3692 log.go:172] (0x4000aa4000) (0x4000a4e0a0) Stream removed, broadcasting: 5\n"
Aug 21 01:30:39.361: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 01:30:39.361: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 21 01:30:49.441: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 21 01:30:59.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:31:00.957: INFO: stderr: "I0821 01:31:00.827476    3715 log.go:172] (0x4000ab20b0) (0x4000a9c000) Create stream\nI0821 01:31:00.834686    3715 log.go:172] (0x4000ab20b0) (0x4000a9c000) Stream added, broadcasting: 1\nI0821 01:31:00.851313    3715 log.go:172] (0x4000ab20b0) Reply frame received for 1\nI0821 01:31:00.852663    3715 log.go:172] (0x4000ab20b0) (0x40007d39a0) Create stream\nI0821 01:31:00.852861    3715 log.go:172] (0x4000ab20b0) (0x40007d39a0) Stream added, broadcasting: 3\nI0821 01:31:00.855015    3715 log.go:172] (0x4000ab20b0) Reply frame received for 3\nI0821 01:31:00.855614    3715 log.go:172] (0x4000ab20b0) (0x4000a9c0a0) Create stream\nI0821 01:31:00.855731    3715 log.go:172] (0x4000ab20b0) (0x4000a9c0a0) Stream added, broadcasting: 5\nI0821 01:31:00.857811    3715 log.go:172] (0x4000ab20b0) Reply frame received for 5\nI0821 01:31:00.933396    3715 log.go:172] (0x4000ab20b0) Data frame received for 3\nI0821 01:31:00.933754    3715 log.go:172] (0x4000ab20b0) Data frame received for 1\nI0821 01:31:00.934027    3715 log.go:172] (0x4000ab20b0) Data frame received for 5\nI0821 01:31:00.934193    3715 log.go:172] (0x4000a9c000) (1) Data frame handling\nI0821 01:31:00.934290    3715 log.go:172] (0x4000a9c0a0) (5) Data frame handling\nI0821 01:31:00.934436    3715 log.go:172] (0x40007d39a0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 01:31:00.935982    3715 log.go:172] (0x4000a9c0a0) (5) Data frame sent\nI0821 01:31:00.936187    3715 log.go:172] (0x40007d39a0) (3) Data frame sent\nI0821 01:31:00.936365    3715 log.go:172] (0x4000ab20b0) Data frame received for 5\nI0821 01:31:00.936451    3715 log.go:172] (0x4000a9c0a0) (5) Data frame handling\nI0821 01:31:00.936690    3715 log.go:172] (0x4000ab20b0) Data frame received for 3\nI0821 01:31:00.936885    3715 log.go:172] (0x40007d39a0) (3) Data frame handling\nI0821 01:31:00.937137    3715 log.go:172] (0x4000a9c000) (1) Data frame sent\nI0821 01:31:00.938821  
  3715 log.go:172] (0x4000ab20b0) (0x4000a9c000) Stream removed, broadcasting: 1\nI0821 01:31:00.941118    3715 log.go:172] (0x4000ab20b0) Go away received\nI0821 01:31:00.944494    3715 log.go:172] (0x4000ab20b0) (0x4000a9c000) Stream removed, broadcasting: 1\nI0821 01:31:00.945105    3715 log.go:172] (0x4000ab20b0) (0x40007d39a0) Stream removed, broadcasting: 3\nI0821 01:31:00.945451    3715 log.go:172] (0x4000ab20b0) (0x4000a9c0a0) Stream removed, broadcasting: 5\n"
Aug 21 01:31:00.958: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 01:31:00.959: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

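The exec commands above wrap the in-container mv in `|| true`; that guard is why the step reports success even when the file has already been moved. A minimal local sketch of the same shell pattern (no cluster required; the path is made up):

```shell
# '|| true' swallows a non-zero status from mv, so the compound
# command always exits 0 -- the behavior the e2e framework relies on
# when the move may already have happened.
mv /no/such/file /tmp/ 2>/dev/null || true
echo "exit status: $?"    # prints: exit status: 0
```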
Aug 21 01:31:11.001: INFO: Waiting for StatefulSet statefulset-1476/ss2 to complete update
Aug 21 01:31:11.002: INFO: Waiting for Pod statefulset-1476/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 01:31:11.002: INFO: Waiting for Pod statefulset-1476/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 01:31:11.002: INFO: Waiting for Pod statefulset-1476/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 01:31:21.016: INFO: Waiting for StatefulSet statefulset-1476/ss2 to complete update
Aug 21 01:31:21.017: INFO: Waiting for Pod statefulset-1476/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 01:31:21.017: INFO: Waiting for Pod statefulset-1476/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 01:31:31.016: INFO: Waiting for StatefulSet statefulset-1476/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 21 01:31:41.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 01:31:42.537: INFO: stderr: "I0821 01:31:42.394871    3737 log.go:172] (0x4000a98000) (0x40006dc820) Create stream\nI0821 01:31:42.399123    3737 log.go:172] (0x4000a98000) (0x40006dc820) Stream added, broadcasting: 1\nI0821 01:31:42.409075    3737 log.go:172] (0x4000a98000) Reply frame received for 1\nI0821 01:31:42.409690    3737 log.go:172] (0x4000a98000) (0x40005555e0) Create stream\nI0821 01:31:42.409755    3737 log.go:172] (0x4000a98000) (0x40005555e0) Stream added, broadcasting: 3\nI0821 01:31:42.411588    3737 log.go:172] (0x4000a98000) Reply frame received for 3\nI0821 01:31:42.412106    3737 log.go:172] (0x4000a98000) (0x400095e000) Create stream\nI0821 01:31:42.412218    3737 log.go:172] (0x4000a98000) (0x400095e000) Stream added, broadcasting: 5\nI0821 01:31:42.414091    3737 log.go:172] (0x4000a98000) Reply frame received for 5\nI0821 01:31:42.488679    3737 log.go:172] (0x4000a98000) Data frame received for 5\nI0821 01:31:42.488987    3737 log.go:172] (0x400095e000) (5) Data frame handling\nI0821 01:31:42.489393    3737 log.go:172] (0x400095e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 01:31:42.517975    3737 log.go:172] (0x4000a98000) Data frame received for 3\nI0821 01:31:42.518234    3737 log.go:172] (0x40005555e0) (3) Data frame handling\nI0821 01:31:42.518386    3737 log.go:172] (0x40005555e0) (3) Data frame sent\nI0821 01:31:42.518523    3737 log.go:172] (0x4000a98000) Data frame received for 3\nI0821 01:31:42.518663    3737 log.go:172] (0x40005555e0) (3) Data frame handling\nI0821 01:31:42.518840    3737 log.go:172] (0x4000a98000) Data frame received for 5\nI0821 01:31:42.519013    3737 log.go:172] (0x400095e000) (5) Data frame handling\nI0821 01:31:42.520330    3737 log.go:172] (0x4000a98000) Data frame received for 1\nI0821 01:31:42.520480    3737 log.go:172] (0x40006dc820) (1) Data frame handling\nI0821 01:31:42.520616    3737 log.go:172] (0x40006dc820) (1) Data frame sent\nI0821 01:31:42.521959    3737 log.go:172] (0x4000a98000) (0x40006dc820) Stream removed, broadcasting: 1\nI0821 01:31:42.524226    3737 log.go:172] (0x4000a98000) Go away received\nI0821 01:31:42.528286    3737 log.go:172] (0x4000a98000) (0x40006dc820) Stream removed, broadcasting: 1\nI0821 01:31:42.528535    3737 log.go:172] (0x4000a98000) (0x40005555e0) Stream removed, broadcasting: 3\nI0821 01:31:42.528708    3737 log.go:172] (0x4000a98000) (0x400095e000) Stream removed, broadcasting: 5\n"
Aug 21 01:31:42.538: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 01:31:42.538: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 01:31:52.588: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 21 01:32:02.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:32:04.054: INFO: stderr: "I0821 01:32:03.947341    3759 log.go:172] (0x4000112370) (0x400056d4a0) Create stream\nI0821 01:32:03.951066    3759 log.go:172] (0x4000112370) (0x400056d4a0) Stream added, broadcasting: 1\nI0821 01:32:03.964276    3759 log.go:172] (0x4000112370) Reply frame received for 1\nI0821 01:32:03.965465    3759 log.go:172] (0x4000112370) (0x4000a4c000) Create stream\nI0821 01:32:03.965578    3759 log.go:172] (0x4000112370) (0x4000a4c000) Stream added, broadcasting: 3\nI0821 01:32:03.967889    3759 log.go:172] (0x4000112370) Reply frame received for 3\nI0821 01:32:03.968454    3759 log.go:172] (0x4000112370) (0x4000a60000) Create stream\nI0821 01:32:03.968619    3759 log.go:172] (0x4000112370) (0x4000a60000) Stream added, broadcasting: 5\nI0821 01:32:03.971020    3759 log.go:172] (0x4000112370) Reply frame received for 5\nI0821 01:32:04.032814    3759 log.go:172] (0x4000112370) Data frame received for 3\nI0821 01:32:04.033089    3759 log.go:172] (0x4000112370) Data frame received for 5\nI0821 01:32:04.033263    3759 log.go:172] (0x4000a4c000) (3) Data frame handling\nI0821 01:32:04.033475    3759 log.go:172] (0x4000a60000) (5) Data frame handling\nI0821 01:32:04.033746    3759 log.go:172] (0x4000112370) Data frame received for 1\nI0821 01:32:04.033897    3759 log.go:172] (0x400056d4a0) (1) Data frame handling\nI0821 01:32:04.035480    3759 log.go:172] (0x4000a4c000) (3) Data frame sent\nI0821 01:32:04.035575    3759 log.go:172] (0x4000a60000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 01:32:04.035975    3759 log.go:172] (0x400056d4a0) (1) Data frame sent\nI0821 01:32:04.036175    3759 log.go:172] (0x4000112370) Data frame received for 3\nI0821 01:32:04.036265    3759 log.go:172] (0x4000a4c000) (3) Data frame handling\nI0821 01:32:04.037895    3759 log.go:172] (0x4000112370) Data frame received for 5\nI0821 01:32:04.038049    3759 log.go:172] (0x4000a60000) (5) Data frame handling\nI0821 01:32:04.038937    3759 log.go:172] (0x4000112370) (0x400056d4a0) Stream removed, broadcasting: 1\nI0821 01:32:04.039489    3759 log.go:172] (0x4000112370) Go away received\nI0821 01:32:04.042841    3759 log.go:172] (0x4000112370) (0x400056d4a0) Stream removed, broadcasting: 1\nI0821 01:32:04.043176    3759 log.go:172] (0x4000112370) (0x4000a4c000) Stream removed, broadcasting: 3\nI0821 01:32:04.043409    3759 log.go:172] (0x4000112370) (0x4000a60000) Stream removed, broadcasting: 5\n"
Aug 21 01:32:04.055: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 01:32:04.055: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 01:32:34.091: INFO: Waiting for StatefulSet statefulset-1476/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 01:32:44.109: INFO: Deleting all statefulset in ns statefulset-1476
Aug 21 01:32:44.127: INFO: Scaling statefulset ss2 to 0
Aug 21 01:33:14.151: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 01:33:14.156: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:33:14.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1476" for this suite.

• [SLOW TEST:176.501 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":230,"skipped":3848,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:33:14.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Aug 21 01:33:14.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-9203 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 21 01:33:15.638: INFO: stderr: ""
Aug 21 01:33:15.639: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Aug 21 01:33:15.639: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 21 01:33:15.640: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9203" to be "running and ready, or succeeded"
Aug 21 01:33:15.645: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.709002ms
Aug 21 01:33:17.652: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012697572s
Aug 21 01:33:19.662: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.022417261s
Aug 21 01:33:19.662: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 21 01:33:19.662: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 21 01:33:19.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9203'
Aug 21 01:33:20.967: INFO: stderr: ""
Aug 21 01:33:20.967: INFO: stdout: "I0821 01:33:18.136187       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/gx9 246\nI0821 01:33:18.336317       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/k5v 515\nI0821 01:33:18.536374       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/v5ds 475\nI0821 01:33:18.736375       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/9crd 295\nI0821 01:33:18.936375       1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/vgg 386\nI0821 01:33:19.136403       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/sd5r 221\nI0821 01:33:19.336361       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/79ts 419\nI0821 01:33:19.536332       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/7nbx 554\nI0821 01:33:19.736317       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4s4 222\nI0821 01:33:19.936310       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/p27 567\nI0821 01:33:20.136419       1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/6rkr 555\nI0821 01:33:20.336302       1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/74f 291\nI0821 01:33:20.536314       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/lkdf 577\nI0821 01:33:20.736309       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/nff2 408\nI0821 01:33:20.936356       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/7qb 290\n"
STEP: limiting log lines
Aug 21 01:33:20.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9203 --tail=1'
Aug 21 01:33:22.303: INFO: stderr: ""
Aug 21 01:33:22.303: INFO: stdout: "I0821 01:33:22.136373       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/ltn8 448\n"
Aug 21 01:33:22.303: INFO: got output "I0821 01:33:22.136373       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/ltn8 448\n"
STEP: limiting log bytes
Aug 21 01:33:22.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9203 --limit-bytes=1'
Aug 21 01:33:23.583: INFO: stderr: ""
Aug 21 01:33:23.583: INFO: stdout: "I"
Aug 21 01:33:23.583: INFO: got output "I"
STEP: exposing timestamps
Aug 21 01:33:23.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9203 --tail=1 --timestamps'
Aug 21 01:33:24.887: INFO: stderr: ""
Aug 21 01:33:24.887: INFO: stdout: "2020-08-21T01:33:24.736678999Z I0821 01:33:24.736423       1 logs_generator.go:76] 33 GET /api/v1/namespaces/kube-system/pods/bjs2 564\n"
Aug 21 01:33:24.888: INFO: got output "2020-08-21T01:33:24.736678999Z I0821 01:33:24.736423       1 logs_generator.go:76] 33 GET /api/v1/namespaces/kube-system/pods/bjs2 564\n"
STEP: restricting to a time range
Aug 21 01:33:27.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9203 --since=1s'
Aug 21 01:33:28.703: INFO: stderr: ""
Aug 21 01:33:28.703: INFO: stdout: "I0821 01:33:27.736365       1 logs_generator.go:76] 48 PUT /api/v1/namespaces/default/pods/kc5m 418\nI0821 01:33:27.936366       1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/8m9z 473\nI0821 01:33:28.136334       1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/hchl 517\nI0821 01:33:28.336339       1 logs_generator.go:76] 51 GET /api/v1/namespaces/ns/pods/zv2 529\nI0821 01:33:28.536370       1 logs_generator.go:76] 52 POST /api/v1/namespaces/ns/pods/r85 446\n"
Aug 21 01:33:28.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9203 --since=24h'
Aug 21 01:33:30.017: INFO: stderr: ""
Aug 21 01:33:30.017: INFO: stdout: "I0821 01:33:18.136187       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/gx9 246\nI0821 01:33:18.336317       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/k5v 515\nI0821 01:33:18.536374       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/v5ds 475\nI0821 01:33:18.736375       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/9crd 295\nI0821 01:33:18.936375       1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/vgg 386\nI0821 01:33:19.136403       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/sd5r 221\nI0821 01:33:19.336361       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/79ts 419\nI0821 01:33:19.536332       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/7nbx 554\nI0821 01:33:19.736317       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4s4 222\nI0821 01:33:19.936310       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/p27 567\nI0821 01:33:20.136419       1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/6rkr 555\nI0821 01:33:20.336302       1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/74f 291\nI0821 01:33:20.536314       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/lkdf 577\nI0821 01:33:20.736309       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/nff2 408\nI0821 01:33:20.936356       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/7qb 290\nI0821 01:33:21.136347       1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/w9vh 265\nI0821 01:33:21.336340       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/wrsh 334\nI0821 01:33:21.536329       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/2zg 410\nI0821 01:33:21.736333       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/m79 470\nI0821 01:33:21.936358       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/229h 316\nI0821 01:33:22.136373       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/ltn8 448\nI0821 01:33:22.336362       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/f847 349\nI0821 01:33:22.536383       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/d7rg 448\nI0821 01:33:22.736305       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/95m 203\nI0821 01:33:22.936354       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/s98 457\nI0821 01:33:23.136307       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/twn 496\nI0821 01:33:23.336301       1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/kbl5 236\nI0821 01:33:23.536331       1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/9sj8 381\nI0821 01:33:23.736324       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/tgzp 461\nI0821 01:33:23.936334       1 logs_generator.go:76] 29 POST /api/v1/namespaces/default/pods/b49k 276\nI0821 01:33:24.136319       1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/bnng 444\nI0821 01:33:24.336438       1 logs_generator.go:76] 31 POST /api/v1/namespaces/kube-system/pods/szf 524\nI0821 01:33:24.536312       1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/mjj2 579\nI0821 01:33:24.736423       1 logs_generator.go:76] 33 GET /api/v1/namespaces/kube-system/pods/bjs2 564\nI0821 01:33:24.936347       1 logs_generator.go:76] 34 PUT /api/v1/namespaces/default/pods/wf94 536\nI0821 01:33:25.136359       1 logs_generator.go:76] 35 POST /api/v1/namespaces/kube-system/pods/wjf 599\nI0821 01:33:25.336363       1 logs_generator.go:76] 36 PUT /api/v1/namespaces/default/pods/z9hw 340\nI0821 01:33:25.536353       1 logs_generator.go:76] 37 POST /api/v1/namespaces/default/pods/nnt8 305\nI0821 01:33:25.736393       1 logs_generator.go:76] 38 PUT /api/v1/namespaces/default/pods/7mr2 402\nI0821 01:33:25.936440       1 logs_generator.go:76] 39 GET /api/v1/namespaces/ns/pods/lqsc 305\nI0821 01:33:26.136340       1 logs_generator.go:76] 40 POST /api/v1/namespaces/kube-system/pods/4zp 512\nI0821 01:33:26.336384       1 logs_generator.go:76] 41 PUT /api/v1/namespaces/default/pods/lm7f 544\nI0821 01:33:26.536378       1 logs_generator.go:76] 42 PUT /api/v1/namespaces/ns/pods/5db 576\nI0821 01:33:26.736424       1 logs_generator.go:76] 43 POST /api/v1/namespaces/default/pods/ntng 382\nI0821 01:33:26.936426       1 logs_generator.go:76] 44 PUT /api/v1/namespaces/default/pods/ckw8 548\nI0821 01:33:27.136353       1 logs_generator.go:76] 45 PUT /api/v1/namespaces/kube-system/pods/fgr 598\nI0821 01:33:27.336373       1 logs_generator.go:76] 46 PUT /api/v1/namespaces/kube-system/pods/vkg 450\nI0821 01:33:27.536354       1 logs_generator.go:76] 47 PUT /api/v1/namespaces/ns/pods/kmxx 584\nI0821 01:33:27.736365       1 logs_generator.go:76] 48 PUT /api/v1/namespaces/default/pods/kc5m 418\nI0821 01:33:27.936366       1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/8m9z 473\nI0821 01:33:28.136334       1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/hchl 517\nI0821 01:33:28.336339       1 logs_generator.go:76] 51 GET /api/v1/namespaces/ns/pods/zv2 529\nI0821 01:33:28.536370       1 logs_generator.go:76] 52 POST /api/v1/namespaces/ns/pods/r85 446\nI0821 01:33:28.736384       1 logs_generator.go:76] 53 POST /api/v1/namespaces/default/pods/dsg 285\nI0821 01:33:28.936307       1 logs_generator.go:76] 54 PUT /api/v1/namespaces/default/pods/z29 406\nI0821 01:33:29.136353       1 logs_generator.go:76] 55 PUT /api/v1/namespaces/kube-system/pods/zxt 453\nI0821 01:33:29.336338       1 logs_generator.go:76] 56 POST /api/v1/namespaces/ns/pods/6lf9 251\nI0821 01:33:29.536335       1 logs_generator.go:76] 57 POST /api/v1/namespaces/default/pods/q2qp 225\nI0821 01:33:29.736373       1 logs_generator.go:76] 58 GET /api/v1/namespaces/kube-system/pods/fzt 344\nI0821 01:33:29.937329       1 logs_generator.go:76] 59 GET /api/v1/namespaces/default/pods/l5x 532\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 21 01:33:30.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9203'
Aug 21 01:33:33.922: INFO: stderr: ""
Aug 21 01:33:33.922: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:33:33.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9203" for this suite.

• [SLOW TEST:19.784 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":231,"skipped":3862,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:33:33.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-vj87
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 01:33:34.069: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vj87" in namespace "subpath-9752" to be "success or failure"
Aug 21 01:33:34.113: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Pending", Reason="", readiness=false. Elapsed: 43.798007ms
Aug 21 01:33:36.120: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050587258s
Aug 21 01:33:38.127: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 4.057412471s
Aug 21 01:33:40.134: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 6.064845132s
Aug 21 01:33:42.142: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 8.072253812s
Aug 21 01:33:44.149: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 10.079457462s
Aug 21 01:33:46.156: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 12.086746673s
Aug 21 01:33:48.161: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 14.091989138s
Aug 21 01:33:50.168: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 16.098998458s
Aug 21 01:33:52.175: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 18.105875706s
Aug 21 01:33:54.183: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 20.113554757s
Aug 21 01:33:56.190: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Running", Reason="", readiness=true. Elapsed: 22.120813386s
Aug 21 01:33:58.198: INFO: Pod "pod-subpath-test-projected-vj87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.128621793s
STEP: Saw pod success
Aug 21 01:33:58.198: INFO: Pod "pod-subpath-test-projected-vj87" satisfied condition "success or failure"
Aug 21 01:33:58.207: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-vj87 container test-container-subpath-projected-vj87: 
STEP: delete the pod
Aug 21 01:33:58.241: INFO: Waiting for pod pod-subpath-test-projected-vj87 to disappear
Aug 21 01:33:58.258: INFO: Pod pod-subpath-test-projected-vj87 no longer exists
STEP: Deleting pod pod-subpath-test-projected-vj87
Aug 21 01:33:58.258: INFO: Deleting pod "pod-subpath-test-projected-vj87" in namespace "subpath-9752"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:33:58.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9752" for this suite.

• [SLOW TEST:24.303 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":232,"skipped":3875,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:33:58.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26
Aug 21 01:33:58.389: INFO: Pod name my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26: Found 0 pods out of 1
Aug 21 01:34:03.394: INFO: Pod name my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26: Found 1 pods out of 1
Aug 21 01:34:03.395: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26" are running
Aug 21 01:34:03.400: INFO: Pod "my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26-g72cp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 01:33:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 01:34:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 01:34:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 01:33:58 +0000 UTC Reason: Message:}])
Aug 21 01:34:03.401: INFO: Trying to dial the pod
Aug 21 01:34:08.419: INFO: Controller my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26: Got expected result from replica 1 [my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26-g72cp]: "my-hostname-basic-317f9579-563b-4f84-997b-a4799fac2b26-g72cp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:34:08.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3080" for this suite.

• [SLOW TEST:10.153 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":233,"skipped":3880,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:34:08.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-1907
STEP: creating replication controller nodeport-test in namespace services-1907
I0821 01:34:08.572607       7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1907, replica count: 2
I0821 01:34:11.624177       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 01:34:14.624943       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 01:34:14.625: INFO: Creating new exec pod
Aug 21 01:34:19.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1907 execpod4qdw6 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 21 01:34:21.298: INFO: stderr: "I0821 01:34:21.163494    3970 log.go:172] (0x400072e000) (0x400081bb80) Create stream\nI0821 01:34:21.166616    3970 log.go:172] (0x400072e000) (0x400081bb80) Stream added, broadcasting: 1\nI0821 01:34:21.181215    3970 log.go:172] (0x400072e000) Reply frame received for 1\nI0821 01:34:21.182491    3970 log.go:172] (0x400072e000) (0x400091a000) Create stream\nI0821 01:34:21.182624    3970 log.go:172] (0x400072e000) (0x400091a000) Stream added, broadcasting: 3\nI0821 01:34:21.184329    3970 log.go:172] (0x400072e000) Reply frame received for 3\nI0821 01:34:21.184595    3970 log.go:172] (0x400072e000) (0x40008fc1e0) Create stream\nI0821 01:34:21.184689    3970 log.go:172] (0x400072e000) (0x40008fc1e0) Stream added, broadcasting: 5\nI0821 01:34:21.186434    3970 log.go:172] (0x400072e000) Reply frame received for 5\nI0821 01:34:21.275906    3970 log.go:172] (0x400072e000) Data frame received for 5\nI0821 01:34:21.276479    3970 log.go:172] (0x400072e000) Data frame received for 3\nI0821 01:34:21.276619    3970 log.go:172] (0x400091a000) (3) Data frame handling\nI0821 01:34:21.276710    3970 log.go:172] (0x40008fc1e0) (5) Data frame handling\nI0821 01:34:21.277945    3970 log.go:172] (0x400072e000) Data frame received for 1\nI0821 01:34:21.278102    3970 log.go:172] (0x400081bb80) (1) Data frame handling\nI0821 01:34:21.278322    3970 log.go:172] (0x400081bb80) (1) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0821 01:34:21.279288    3970 log.go:172] (0x40008fc1e0) (5) Data frame sent\nI0821 01:34:21.279614    3970 log.go:172] (0x400072e000) Data frame received for 5\nI0821 01:34:21.279739    3970 log.go:172] (0x40008fc1e0) (5) Data frame handling\nI0821 01:34:21.279867    3970 log.go:172] (0x40008fc1e0) (5) Data frame sent\nI0821 01:34:21.280053    3970 log.go:172] (0x400072e000) Data frame received for 5\nI0821 01:34:21.280147    3970 log.go:172] (0x40008fc1e0) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0821 01:34:21.281928    3970 log.go:172] (0x400072e000) (0x400081bb80) Stream removed, broadcasting: 1\nI0821 01:34:21.283825    3970 log.go:172] (0x400072e000) Go away received\nI0821 01:34:21.287735    3970 log.go:172] (0x400072e000) (0x400081bb80) Stream removed, broadcasting: 1\nI0821 01:34:21.288114    3970 log.go:172] (0x400072e000) (0x400091a000) Stream removed, broadcasting: 3\nI0821 01:34:21.288324    3970 log.go:172] (0x400072e000) (0x40008fc1e0) Stream removed, broadcasting: 5\n"
Aug 21 01:34:21.299: INFO: stdout: ""
Aug 21 01:34:21.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1907 execpod4qdw6 -- /bin/sh -x -c nc -zv -t -w 2 10.109.102.98 80'
Aug 21 01:34:22.883: INFO: stderr: "I0821 01:34:22.757380    3996 log.go:172] (0x4000a0a000) (0x4000a12000) Create stream\nI0821 01:34:22.759511    3996 log.go:172] (0x4000a0a000) (0x4000a12000) Stream added, broadcasting: 1\nI0821 01:34:22.772919    3996 log.go:172] (0x4000a0a000) Reply frame received for 1\nI0821 01:34:22.774171    3996 log.go:172] (0x4000a0a000) (0x4000a120a0) Create stream\nI0821 01:34:22.774309    3996 log.go:172] (0x4000a0a000) (0x4000a120a0) Stream added, broadcasting: 3\nI0821 01:34:22.776553    3996 log.go:172] (0x4000a0a000) Reply frame received for 3\nI0821 01:34:22.777126    3996 log.go:172] (0x4000a0a000) (0x4000a12140) Create stream\nI0821 01:34:22.777240    3996 log.go:172] (0x4000a0a000) (0x4000a12140) Stream added, broadcasting: 5\nI0821 01:34:22.778915    3996 log.go:172] (0x4000a0a000) Reply frame received for 5\nI0821 01:34:22.867563    3996 log.go:172] (0x4000a0a000) Data frame received for 5\nI0821 01:34:22.867858    3996 log.go:172] (0x4000a0a000) Data frame received for 3\nI0821 01:34:22.867982    3996 log.go:172] (0x4000a120a0) (3) Data frame handling\nI0821 01:34:22.868076    3996 log.go:172] (0x4000a12140) (5) Data frame handling\nI0821 01:34:22.868186    3996 log.go:172] (0x4000a0a000) Data frame received for 1\nI0821 01:34:22.868271    3996 log.go:172] (0x4000a12000) (1) Data frame handling\n+ nc -zv -t -w 2 10.109.102.98 80\nConnection to 10.109.102.98 80 port [tcp/http] succeeded!\nI0821 01:34:22.869724    3996 log.go:172] (0x4000a12000) (1) Data frame sent\nI0821 01:34:22.869874    3996 log.go:172] (0x4000a12140) (5) Data frame sent\nI0821 01:34:22.869948    3996 log.go:172] (0x4000a0a000) Data frame received for 5\nI0821 01:34:22.870002    3996 log.go:172] (0x4000a12140) (5) Data frame handling\nI0821 01:34:22.870570    3996 log.go:172] (0x4000a0a000) (0x4000a12000) Stream removed, broadcasting: 1\nI0821 01:34:22.872614    3996 log.go:172] (0x4000a0a000) Go away received\nI0821 01:34:22.874334    3996 log.go:172] (0x4000a0a000) (0x4000a12000) Stream removed, broadcasting: 1\nI0821 01:34:22.874648    3996 log.go:172] (0x4000a0a000) (0x4000a120a0) Stream removed, broadcasting: 3\nI0821 01:34:22.874880    3996 log.go:172] (0x4000a0a000) (0x4000a12140) Stream removed, broadcasting: 5\n"
Aug 21 01:34:22.884: INFO: stdout: ""
Aug 21 01:34:22.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1907 execpod4qdw6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31083'
Aug 21 01:34:26.904: INFO: stderr: "I0821 01:34:26.795401    4020 log.go:172] (0x4000b76dc0) (0x40004925a0) Create stream\nI0821 01:34:26.797517    4020 log.go:172] (0x4000b76dc0) (0x40004925a0) Stream added, broadcasting: 1\nI0821 01:34:26.809977    4020 log.go:172] (0x4000b76dc0) Reply frame received for 1\nI0821 01:34:26.810516    4020 log.go:172] (0x4000b76dc0) (0x4000718000) Create stream\nI0821 01:34:26.810573    4020 log.go:172] (0x4000b76dc0) (0x4000718000) Stream added, broadcasting: 3\nI0821 01:34:26.812395    4020 log.go:172] (0x4000b76dc0) Reply frame received for 3\nI0821 01:34:26.813052    4020 log.go:172] (0x4000b76dc0) (0x4000492640) Create stream\nI0821 01:34:26.813175    4020 log.go:172] (0x4000b76dc0) (0x4000492640) Stream added, broadcasting: 5\nI0821 01:34:26.815312    4020 log.go:172] (0x4000b76dc0) Reply frame received for 5\nI0821 01:34:26.882651    4020 log.go:172] (0x4000b76dc0) Data frame received for 5\nI0821 01:34:26.883012    4020 log.go:172] (0x4000492640) (5) Data frame handling\nI0821 01:34:26.883256    4020 log.go:172] (0x4000b76dc0) Data frame received for 1\nI0821 01:34:26.883468    4020 log.go:172] (0x40004925a0) (1) Data frame handling\nI0821 01:34:26.883630    4020 log.go:172] (0x4000b76dc0) Data frame received for 3\nI0821 01:34:26.883799    4020 log.go:172] (0x4000718000) (3) Data frame handling\nI0821 01:34:26.884953    4020 log.go:172] (0x4000492640) (5) Data frame sent\nI0821 01:34:26.885466    4020 log.go:172] (0x4000b76dc0) Data frame received for 5\nI0821 01:34:26.885537    4020 log.go:172] (0x4000492640) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 31083\nI0821 01:34:26.885750    4020 log.go:172] (0x40004925a0) (1) Data frame sent\nI0821 01:34:26.888888    4020 log.go:172] (0x4000b76dc0) (0x40004925a0) Stream removed, broadcasting: 1\nConnection to 172.18.0.6 31083 port [tcp/31083] succeeded!\nI0821 01:34:26.890050    4020 log.go:172] (0x4000492640) (5) Data frame sent\nI0821 01:34:26.890136    4020 log.go:172] (0x4000b76dc0) Data frame received for 5\nI0821 01:34:26.890194    4020 log.go:172] (0x4000492640) (5) Data frame handling\nI0821 01:34:26.890364    4020 log.go:172] (0x4000b76dc0) Go away received\nI0821 01:34:26.892145    4020 log.go:172] (0x4000b76dc0) (0x40004925a0) Stream removed, broadcasting: 1\nI0821 01:34:26.892432    4020 log.go:172] (0x4000b76dc0) (0x4000718000) Stream removed, broadcasting: 3\nI0821 01:34:26.892633    4020 log.go:172] (0x4000b76dc0) (0x4000492640) Stream removed, broadcasting: 5\n"
Aug 21 01:34:26.905: INFO: stdout: ""
Aug 21 01:34:26.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1907 execpod4qdw6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31083'
Aug 21 01:34:28.398: INFO: stderr: "I0821 01:34:28.267946    4057 log.go:172] (0x4000a64000) (0x400094e000) Create stream\nI0821 01:34:28.272488    4057 log.go:172] (0x4000a64000) (0x400094e000) Stream added, broadcasting: 1\nI0821 01:34:28.285163    4057 log.go:172] (0x4000a64000) Reply frame received for 1\nI0821 01:34:28.286113    4057 log.go:172] (0x4000a64000) (0x4000a3c000) Create stream\nI0821 01:34:28.286206    4057 log.go:172] (0x4000a64000) (0x4000a3c000) Stream added, broadcasting: 3\nI0821 01:34:28.288075    4057 log.go:172] (0x4000a64000) Reply frame received for 3\nI0821 01:34:28.288369    4057 log.go:172] (0x4000a64000) (0x4000809ae0) Create stream\nI0821 01:34:28.288439    4057 log.go:172] (0x4000a64000) (0x4000809ae0) Stream added, broadcasting: 5\nI0821 01:34:28.289913    4057 log.go:172] (0x4000a64000) Reply frame received for 5\nI0821 01:34:28.376516    4057 log.go:172] (0x4000a64000) Data frame received for 5\nI0821 01:34:28.376940    4057 log.go:172] (0x4000a64000) Data frame received for 1\nI0821 01:34:28.377169    4057 log.go:172] (0x4000a64000) Data frame received for 3\nI0821 01:34:28.377464    4057 log.go:172] (0x400094e000) (1) Data frame handling\nI0821 01:34:28.377661    4057 log.go:172] (0x4000a3c000) (3) Data frame handling\nI0821 01:34:28.377784    4057 log.go:172] (0x4000809ae0) (5) Data frame handling\nI0821 01:34:28.378441    4057 log.go:172] (0x4000809ae0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 31083\nConnection to 172.18.0.3 31083 port [tcp/31083] succeeded!\nI0821 01:34:28.379437    4057 log.go:172] (0x4000a64000) Data frame received for 5\nI0821 01:34:28.379529    4057 log.go:172] (0x4000809ae0) (5) Data frame handling\nI0821 01:34:28.379601    4057 log.go:172] (0x400094e000) (1) Data frame sent\nI0821 01:34:28.380529    4057 log.go:172] (0x4000a64000) (0x400094e000) Stream removed, broadcasting: 1\nI0821 01:34:28.384092    4057 log.go:172] (0x4000a64000) Go away received\nI0821 01:34:28.386327    4057 log.go:172] (0x4000a64000) (0x400094e000) Stream removed, broadcasting: 1\nI0821 01:34:28.386934    4057 log.go:172] (0x4000a64000) (0x4000a3c000) Stream removed, broadcasting: 3\nI0821 01:34:28.387389    4057 log.go:172] (0x4000a64000) (0x4000809ae0) Stream removed, broadcasting: 5\n"
Aug 21 01:34:28.399: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:34:28.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1907" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.979 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":234,"skipped":3885,"failed":0}
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:34:28.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 21 01:34:28.485: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:34:35.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9766" for this suite.

• [SLOW TEST:7.453 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":235,"skipped":3890,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:34:35.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 21 01:34:35.946: INFO: Waiting up to 5m0s for pod "pod-13daffe0-7472-494b-9f2e-d333b29764d1" in namespace "emptydir-3457" to be "success or failure"
Aug 21 01:34:35.971: INFO: Pod "pod-13daffe0-7472-494b-9f2e-d333b29764d1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.932839ms
Aug 21 01:34:37.977: INFO: Pod "pod-13daffe0-7472-494b-9f2e-d333b29764d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030926147s
Aug 21 01:34:39.985: INFO: Pod "pod-13daffe0-7472-494b-9f2e-d333b29764d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038477809s
STEP: Saw pod success
Aug 21 01:34:39.985: INFO: Pod "pod-13daffe0-7472-494b-9f2e-d333b29764d1" satisfied condition "success or failure"
Aug 21 01:34:39.991: INFO: Trying to get logs from node jerma-worker2 pod pod-13daffe0-7472-494b-9f2e-d333b29764d1 container test-container: 
STEP: delete the pod
Aug 21 01:34:40.031: INFO: Waiting for pod pod-13daffe0-7472-494b-9f2e-d333b29764d1 to disappear
Aug 21 01:34:40.043: INFO: Pod pod-13daffe0-7472-494b-9f2e-d333b29764d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:34:40.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3457" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3891,"failed":0}
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:34:40.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 21 01:34:44.203: INFO: &Pod{ObjectMeta:{send-events-8d6eba96-c33a-4f06-951d-8a357352582c  events-8796 /api/v1/namespaces/events-8796/pods/send-events-8d6eba96-c33a-4f06-951d-8a357352582c 2729ecca-37a3-466b-8c02-9babecf8bb2c 1998340 0 2020-08-21 01:34:40 +0000 UTC   map[name:foo time:171177291] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dwv59,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dwv59,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dwv59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:34:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:34:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:34:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:34:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.115,StartTime:2020-08-21 01:34:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 01:34:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1889e946ff087a8df75dd30542e0709966a08891685cac6941d797a5329c49e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 21 01:34:46.214: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 21 01:34:48.224: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:34:48.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8796" for this suite.

• [SLOW TEST:8.209 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":237,"skipped":3892,"failed":0}
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:34:48.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 21 01:34:52.935: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5022 pod-service-account-3efce8b4-010e-43ce-ae7d-07662c437e4d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 21 01:34:54.432: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5022 pod-service-account-3efce8b4-010e-43ce-ae7d-07662c437e4d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 21 01:34:55.922: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5022 pod-service-account-3efce8b4-010e-43ce-ae7d-07662c437e4d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:34:57.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5022" for this suite.

• [SLOW TEST:9.170 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":238,"skipped":3899,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:34:57.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 21 01:34:57.538: INFO: Waiting up to 5m0s for pod "pod-9b293a52-bd48-4002-aa08-4439bf68feeb" in namespace "emptydir-9166" to be "success or failure"
Aug 21 01:34:57.547: INFO: Pod "pod-9b293a52-bd48-4002-aa08-4439bf68feeb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.443196ms
Aug 21 01:34:59.553: INFO: Pod "pod-9b293a52-bd48-4002-aa08-4439bf68feeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01571114s
Aug 21 01:35:01.560: INFO: Pod "pod-9b293a52-bd48-4002-aa08-4439bf68feeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022801024s
STEP: Saw pod success
Aug 21 01:35:01.561: INFO: Pod "pod-9b293a52-bd48-4002-aa08-4439bf68feeb" satisfied condition "success or failure"
Aug 21 01:35:01.565: INFO: Trying to get logs from node jerma-worker2 pod pod-9b293a52-bd48-4002-aa08-4439bf68feeb container test-container: 
STEP: delete the pod
Aug 21 01:35:01.591: INFO: Waiting for pod pod-9b293a52-bd48-4002-aa08-4439bf68feeb to disappear
Aug 21 01:35:01.595: INFO: Pod pod-9b293a52-bd48-4002-aa08-4439bf68feeb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:35:01.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9166" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3914,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:35:01.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-q54v
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 01:35:01.734: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q54v" in namespace "subpath-1976" to be "success or failure"
Aug 21 01:35:01.738: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612247ms
Aug 21 01:35:03.745: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011530646s
Aug 21 01:35:05.753: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 4.018952507s
Aug 21 01:35:07.759: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 6.02548641s
Aug 21 01:35:09.766: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 8.03210151s
Aug 21 01:35:11.773: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 10.039507043s
Aug 21 01:35:13.780: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 12.045762686s
Aug 21 01:35:15.786: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 14.052501584s
Aug 21 01:35:17.792: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 16.058133449s
Aug 21 01:35:19.799: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 18.064978686s
Aug 21 01:35:21.806: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 20.071842395s
Aug 21 01:35:23.811: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Running", Reason="", readiness=true. Elapsed: 22.077114937s
Aug 21 01:35:25.821: INFO: Pod "pod-subpath-test-configmap-q54v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.087208934s
STEP: Saw pod success
Aug 21 01:35:25.821: INFO: Pod "pod-subpath-test-configmap-q54v" satisfied condition "success or failure"
Aug 21 01:35:25.825: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-q54v container test-container-subpath-configmap-q54v: 
STEP: delete the pod
Aug 21 01:35:25.904: INFO: Waiting for pod pod-subpath-test-configmap-q54v to disappear
Aug 21 01:35:25.914: INFO: Pod pod-subpath-test-configmap-q54v no longer exists
STEP: Deleting pod pod-subpath-test-configmap-q54v
Aug 21 01:35:25.914: INFO: Deleting pod "pod-subpath-test-configmap-q54v" in namespace "subpath-1976"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:35:25.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1976" for this suite.

• [SLOW TEST:24.454 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":240,"skipped":3942,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:35:26.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 21 01:35:26.524: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:35:41.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6189" for this suite.

• [SLOW TEST:15.550 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:35:41.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 21 01:35:41.718: INFO: Waiting up to 5m0s for pod "client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8" in namespace "containers-5448" to be "success or failure"
Aug 21 01:35:41.773: INFO: Pod "client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8": Phase="Pending", Reason="", readiness=false. Elapsed: 54.252841ms
Aug 21 01:35:43.780: INFO: Pod "client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061339473s
Aug 21 01:35:45.787: INFO: Pod "client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068520827s
STEP: Saw pod success
Aug 21 01:35:45.787: INFO: Pod "client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8" satisfied condition "success or failure"
Aug 21 01:35:45.793: INFO: Trying to get logs from node jerma-worker2 pod client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8 container test-container: 
STEP: delete the pod
Aug 21 01:35:45.814: INFO: Waiting for pod client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8 to disappear
Aug 21 01:35:45.844: INFO: Pod client-containers-ddbddbb7-5375-4c6e-ad33-061f5baa82e8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:35:45.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5448" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3995,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:35:45.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 21 01:35:45.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9999'
Aug 21 01:35:47.553: INFO: stderr: ""
Aug 21 01:35:47.554: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 01:35:47.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9999'
Aug 21 01:35:48.833: INFO: stderr: ""
Aug 21 01:35:48.834: INFO: stdout: "update-demo-nautilus-5lt42 update-demo-nautilus-9xctw "
Aug 21 01:35:48.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5lt42 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:35:50.087: INFO: stderr: ""
Aug 21 01:35:50.087: INFO: stdout: ""
Aug 21 01:35:50.087: INFO: update-demo-nautilus-5lt42 is created but not running
Aug 21 01:35:55.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9999'
Aug 21 01:35:56.362: INFO: stderr: ""
Aug 21 01:35:56.362: INFO: stdout: "update-demo-nautilus-5lt42 update-demo-nautilus-9xctw "
Aug 21 01:35:56.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5lt42 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:35:57.617: INFO: stderr: ""
Aug 21 01:35:57.617: INFO: stdout: "true"
Aug 21 01:35:57.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5lt42 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:35:58.873: INFO: stderr: ""
Aug 21 01:35:58.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 01:35:58.874: INFO: validating pod update-demo-nautilus-5lt42
Aug 21 01:35:58.879: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 01:35:58.879: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 01:35:58.879: INFO: update-demo-nautilus-5lt42 is verified up and running
Aug 21 01:35:58.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9xctw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:36:00.154: INFO: stderr: ""
Aug 21 01:36:00.154: INFO: stdout: "true"
Aug 21 01:36:00.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9xctw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:36:01.458: INFO: stderr: ""
Aug 21 01:36:01.459: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 01:36:01.459: INFO: validating pod update-demo-nautilus-9xctw
Aug 21 01:36:01.473: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 01:36:01.473: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 01:36:01.473: INFO: update-demo-nautilus-9xctw is verified up and running
STEP: rolling-update to new replication controller
Aug 21 01:36:01.482: INFO: scanned /root for discovery docs: 
Aug 21 01:36:01.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9999'
Aug 21 01:36:26.268: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 21 01:36:26.268: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 01:36:26.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9999'
Aug 21 01:36:27.552: INFO: stderr: ""
Aug 21 01:36:27.553: INFO: stdout: "update-demo-kitten-bfc84 update-demo-kitten-h7snm "
Aug 21 01:36:27.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bfc84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:36:28.790: INFO: stderr: ""
Aug 21 01:36:28.790: INFO: stdout: "true"
Aug 21 01:36:28.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bfc84 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:36:30.039: INFO: stderr: ""
Aug 21 01:36:30.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 21 01:36:30.039: INFO: validating pod update-demo-kitten-bfc84
Aug 21 01:36:30.045: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 21 01:36:30.046: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 21 01:36:30.046: INFO: update-demo-kitten-bfc84 is verified up and running
Aug 21 01:36:30.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h7snm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:36:31.351: INFO: stderr: ""
Aug 21 01:36:31.351: INFO: stdout: "true"
Aug 21 01:36:31.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h7snm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9999'
Aug 21 01:36:32.615: INFO: stderr: ""
Aug 21 01:36:32.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 21 01:36:32.616: INFO: validating pod update-demo-kitten-h7snm
Aug 21 01:36:32.622: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 21 01:36:32.622: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 21 01:36:32.622: INFO: update-demo-kitten-h7snm is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:36:32.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9999" for this suite.

• [SLOW TEST:46.789 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":243,"skipped":4011,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:36:32.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:36:32.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 21 01:36:32.898: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T01:36:32Z generation:1 name:name1 resourceVersion:1998912 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e707ac33-5037-4506-a811-4cb0e2a4fed4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 21 01:36:42.910: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T01:36:42Z generation:1 name:name2 resourceVersion:1998975 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:51a4720e-51ce-4530-999f-6e82ce0121e6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 21 01:36:52.921: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T01:36:32Z generation:2 name:name1 resourceVersion:1999008 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e707ac33-5037-4506-a811-4cb0e2a4fed4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 21 01:37:02.929: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T01:36:42Z generation:2 name:name2 resourceVersion:1999038 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:51a4720e-51ce-4530-999f-6e82ce0121e6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 21 01:37:12.942: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T01:36:32Z generation:2 name:name1 resourceVersion:1999069 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:e707ac33-5037-4506-a811-4cb0e2a4fed4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 21 01:37:22.953: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T01:36:42Z generation:2 name:name2 resourceVersion:1999099 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:51a4720e-51ce-4530-999f-6e82ce0121e6] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:37:33.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3535" for this suite.

• [SLOW TEST:60.834 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":244,"skipped":4049,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:37:33.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:37:33.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 01:37:52.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7531 create -f -'
Aug 21 01:37:57.062: INFO: stderr: ""
Aug 21 01:37:57.062: INFO: stdout: "e2e-test-crd-publish-openapi-7579-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 21 01:37:57.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7531 delete e2e-test-crd-publish-openapi-7579-crds test-cr'
Aug 21 01:37:58.290: INFO: stderr: ""
Aug 21 01:37:58.290: INFO: stdout: "e2e-test-crd-publish-openapi-7579-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 21 01:37:58.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7531 apply -f -'
Aug 21 01:37:59.934: INFO: stderr: ""
Aug 21 01:37:59.934: INFO: stdout: "e2e-test-crd-publish-openapi-7579-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 21 01:37:59.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7531 delete e2e-test-crd-publish-openapi-7579-crds test-cr'
Aug 21 01:38:01.181: INFO: stderr: ""
Aug 21 01:38:01.181: INFO: stdout: "e2e-test-crd-publish-openapi-7579-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 21 01:38:01.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7579-crds'
Aug 21 01:38:02.790: INFO: stderr: ""
Aug 21 01:38:02.790: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7579-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:38:22.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7531" for this suite.

• [SLOW TEST:49.108 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":245,"skipped":4056,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:38:22.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-436/configmap-test-e208c2bd-3cab-4556-a9c1-463f939e38bb
STEP: Creating a pod to test consume configMaps
Aug 21 01:38:22.694: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226" in namespace "configmap-436" to be "success or failure"
Aug 21 01:38:22.702: INFO: Pod "pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226": Phase="Pending", Reason="", readiness=false. Elapsed: 7.041219ms
Aug 21 01:38:24.739: INFO: Pod "pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044678831s
Aug 21 01:38:26.746: INFO: Pod "pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050891564s
STEP: Saw pod success
Aug 21 01:38:26.746: INFO: Pod "pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226" satisfied condition "success or failure"
Aug 21 01:38:26.751: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226 container env-test: 
STEP: delete the pod
Aug 21 01:38:26.909: INFO: Waiting for pod pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226 to disappear
Aug 21 01:38:26.913: INFO: Pod pod-configmaps-d2a21ab1-a5e5-47cd-8e31-942529043226 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:38:26.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-436" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4081,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:38:26.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:38:38.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9209" for this suite.

• [SLOW TEST:11.215 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":247,"skipped":4091,"failed":0}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:38:38.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-7hjs
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 01:38:38.233: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7hjs" in namespace "subpath-5717" to be "success or failure"
Aug 21 01:38:38.247: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.072096ms
Aug 21 01:38:40.254: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020823617s
Aug 21 01:38:42.262: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 4.028277341s
Aug 21 01:38:44.269: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 6.035551675s
Aug 21 01:38:46.275: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 8.041230134s
Aug 21 01:38:48.282: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 10.04906667s
Aug 21 01:38:50.290: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 12.056731992s
Aug 21 01:38:52.297: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 14.063765986s
Aug 21 01:38:54.304: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 16.070562976s
Aug 21 01:38:56.311: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 18.077702087s
Aug 21 01:38:58.319: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 20.085173294s
Aug 21 01:39:00.326: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Running", Reason="", readiness=true. Elapsed: 22.092373832s
Aug 21 01:39:02.334: INFO: Pod "pod-subpath-test-secret-7hjs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.100426199s
STEP: Saw pod success
Aug 21 01:39:02.334: INFO: Pod "pod-subpath-test-secret-7hjs" satisfied condition "success or failure"
Aug 21 01:39:02.339: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-7hjs container test-container-subpath-secret-7hjs: 
STEP: delete the pod
Aug 21 01:39:02.364: INFO: Waiting for pod pod-subpath-test-secret-7hjs to disappear
Aug 21 01:39:02.367: INFO: Pod pod-subpath-test-secret-7hjs no longer exists
STEP: Deleting pod pod-subpath-test-secret-7hjs
Aug 21 01:39:02.368: INFO: Deleting pod "pod-subpath-test-secret-7hjs" in namespace "subpath-5717"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:39:02.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5717" for this suite.

• [SLOW TEST:24.237 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":248,"skipped":4091,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:39:02.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 21 01:39:06.563: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5664 PodName:pod-sharedvolume-cc46b5f5-d38f-47cd-9f70-6f3d76683938 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 01:39:06.563: INFO: >>> kubeConfig: /root/.kube/config
I0821 01:39:06.620626       7 log.go:172] (0x4002b0aa50) (0x4001e9c000) Create stream
I0821 01:39:06.620855       7 log.go:172] (0x4002b0aa50) (0x4001e9c000) Stream added, broadcasting: 1
I0821 01:39:06.624318       7 log.go:172] (0x4002b0aa50) Reply frame received for 1
I0821 01:39:06.624573       7 log.go:172] (0x4002b0aa50) (0x40013a3400) Create stream
I0821 01:39:06.624686       7 log.go:172] (0x4002b0aa50) (0x40013a3400) Stream added, broadcasting: 3
I0821 01:39:06.626749       7 log.go:172] (0x4002b0aa50) Reply frame received for 3
I0821 01:39:06.626934       7 log.go:172] (0x4002b0aa50) (0x4001e9c0a0) Create stream
I0821 01:39:06.627026       7 log.go:172] (0x4002b0aa50) (0x4001e9c0a0) Stream added, broadcasting: 5
I0821 01:39:06.628708       7 log.go:172] (0x4002b0aa50) Reply frame received for 5
I0821 01:39:06.698033       7 log.go:172] (0x4002b0aa50) Data frame received for 5
I0821 01:39:06.698245       7 log.go:172] (0x4001e9c0a0) (5) Data frame handling
I0821 01:39:06.698414       7 log.go:172] (0x4002b0aa50) Data frame received for 3
I0821 01:39:06.698570       7 log.go:172] (0x40013a3400) (3) Data frame handling
I0821 01:39:06.698723       7 log.go:172] (0x40013a3400) (3) Data frame sent
I0821 01:39:06.698849       7 log.go:172] (0x4002b0aa50) Data frame received for 3
I0821 01:39:06.698964       7 log.go:172] (0x40013a3400) (3) Data frame handling
I0821 01:39:06.699928       7 log.go:172] (0x4002b0aa50) Data frame received for 1
I0821 01:39:06.700104       7 log.go:172] (0x4001e9c000) (1) Data frame handling
I0821 01:39:06.700265       7 log.go:172] (0x4001e9c000) (1) Data frame sent
I0821 01:39:06.700433       7 log.go:172] (0x4002b0aa50) (0x4001e9c000) Stream removed, broadcasting: 1
I0821 01:39:06.700626       7 log.go:172] (0x4002b0aa50) Go away received
I0821 01:39:06.701323       7 log.go:172] (0x4002b0aa50) (0x4001e9c000) Stream removed, broadcasting: 1
I0821 01:39:06.701473       7 log.go:172] (0x4002b0aa50) (0x40013a3400) Stream removed, broadcasting: 3
I0821 01:39:06.701585       7 log.go:172] (0x4002b0aa50) (0x4001e9c0a0) Stream removed, broadcasting: 5
Aug 21 01:39:06.701: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:39:06.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5664" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":249,"skipped":4125,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:39:06.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6397
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-6397
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6397
Aug 21 01:39:06.859: INFO: Found 0 stateful pods, waiting for 1
Aug 21 01:39:16.867: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 21 01:39:16.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 01:39:18.408: INFO: stderr: "I0821 01:39:18.240339    4599 log.go:172] (0x4000a7a000) (0x400070c000) Create stream\nI0821 01:39:18.246796    4599 log.go:172] (0x4000a7a000) (0x400070c000) Stream added, broadcasting: 1\nI0821 01:39:18.259961    4599 log.go:172] (0x4000a7a000) Reply frame received for 1\nI0821 01:39:18.260623    4599 log.go:172] (0x4000a7a000) (0x400070c0a0) Create stream\nI0821 01:39:18.260687    4599 log.go:172] (0x4000a7a000) (0x400070c0a0) Stream added, broadcasting: 3\nI0821 01:39:18.262261    4599 log.go:172] (0x4000a7a000) Reply frame received for 3\nI0821 01:39:18.262563    4599 log.go:172] (0x4000a7a000) (0x400075a000) Create stream\nI0821 01:39:18.262628    4599 log.go:172] (0x4000a7a000) (0x400075a000) Stream added, broadcasting: 5\nI0821 01:39:18.264099    4599 log.go:172] (0x4000a7a000) Reply frame received for 5\nI0821 01:39:18.349480    4599 log.go:172] (0x4000a7a000) Data frame received for 5\nI0821 01:39:18.349667    4599 log.go:172] (0x400075a000) (5) Data frame handling\nI0821 01:39:18.350014    4599 log.go:172] (0x400075a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 01:39:18.383660    4599 log.go:172] (0x4000a7a000) Data frame received for 3\nI0821 01:39:18.383767    4599 log.go:172] (0x400070c0a0) (3) Data frame handling\nI0821 01:39:18.383863    4599 log.go:172] (0x400070c0a0) (3) Data frame sent\nI0821 01:39:18.384059    4599 log.go:172] (0x4000a7a000) Data frame received for 5\nI0821 01:39:18.384266    4599 log.go:172] (0x400075a000) (5) Data frame handling\nI0821 01:39:18.384504    4599 log.go:172] (0x4000a7a000) Data frame received for 3\nI0821 01:39:18.384706    4599 log.go:172] (0x400070c0a0) (3) Data frame handling\nI0821 01:39:18.386301    4599 log.go:172] (0x4000a7a000) Data frame received for 1\nI0821 01:39:18.386431    4599 log.go:172] (0x400070c000) (1) Data frame handling\nI0821 01:39:18.386561    4599 log.go:172] (0x400070c000) (1) Data frame sent\nI0821 01:39:18.387929    4599 log.go:172] (0x4000a7a000) (0x400070c000) Stream removed, broadcasting: 1\nI0821 01:39:18.392176    4599 log.go:172] (0x4000a7a000) Go away received\nI0821 01:39:18.395200    4599 log.go:172] (0x4000a7a000) (0x400070c000) Stream removed, broadcasting: 1\nI0821 01:39:18.396065    4599 log.go:172] (0x4000a7a000) (0x400070c0a0) Stream removed, broadcasting: 3\nI0821 01:39:18.396458    4599 log.go:172] (0x4000a7a000) (0x400075a000) Stream removed, broadcasting: 5\n"
Aug 21 01:39:18.409: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 01:39:18.409: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 01:39:18.440: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 01:39:18.440: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 01:39:18.475: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:18.477: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:18.477: INFO: 
Aug 21 01:39:18.477: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 21 01:39:19.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985605135s
Aug 21 01:39:20.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977952081s
Aug 21 01:39:21.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.775098546s
Aug 21 01:39:22.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.73253643s
Aug 21 01:39:23.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.723124266s
Aug 21 01:39:24.758: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.714483598s
Aug 21 01:39:25.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.704967225s
Aug 21 01:39:26.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.696347559s
Aug 21 01:39:27.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 687.207323ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6397
Aug 21 01:39:28.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:39:30.266: INFO: stderr: "I0821 01:39:30.133302    4622 log.go:172] (0x4000122f20) (0x400070fea0) Create stream\nI0821 01:39:30.135883    4622 log.go:172] (0x4000122f20) (0x400070fea0) Stream added, broadcasting: 1\nI0821 01:39:30.150958    4622 log.go:172] (0x4000122f20) Reply frame received for 1\nI0821 01:39:30.151592    4622 log.go:172] (0x4000122f20) (0x40006ac780) Create stream\nI0821 01:39:30.151655    4622 log.go:172] (0x4000122f20) (0x40006ac780) Stream added, broadcasting: 3\nI0821 01:39:30.153000    4622 log.go:172] (0x4000122f20) Reply frame received for 3\nI0821 01:39:30.153256    4622 log.go:172] (0x4000122f20) (0x400045f540) Create stream\nI0821 01:39:30.153312    4622 log.go:172] (0x4000122f20) (0x400045f540) Stream added, broadcasting: 5\nI0821 01:39:30.154785    4622 log.go:172] (0x4000122f20) Reply frame received for 5\nI0821 01:39:30.240637    4622 log.go:172] (0x4000122f20) Data frame received for 5\nI0821 01:39:30.241213    4622 log.go:172] (0x4000122f20) Data frame received for 3\nI0821 01:39:30.241763    4622 log.go:172] (0x4000122f20) Data frame received for 1\nI0821 01:39:30.241944    4622 log.go:172] (0x400070fea0) (1) Data frame handling\nI0821 01:39:30.242030    4622 log.go:172] (0x40006ac780) (3) Data frame handling\nI0821 01:39:30.242210    4622 log.go:172] (0x400045f540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 01:39:30.243947    4622 log.go:172] (0x400070fea0) (1) Data frame sent\nI0821 01:39:30.244716    4622 log.go:172] (0x400045f540) (5) Data frame sent\nI0821 01:39:30.244903    4622 log.go:172] (0x4000122f20) Data frame received for 5\nI0821 01:39:30.244996    4622 log.go:172] (0x40006ac780) (3) Data frame sent\nI0821 01:39:30.245086    4622 log.go:172] (0x4000122f20) Data frame received for 3\nI0821 01:39:30.245146    4622 log.go:172] (0x40006ac780) (3) Data frame handling\nI0821 01:39:30.245295    4622 log.go:172] (0x400045f540) (5) Data frame handling\nI0821 01:39:30.247264    4622 log.go:172] (0x4000122f20) (0x400070fea0) Stream removed, broadcasting: 1\nI0821 01:39:30.247844    4622 log.go:172] (0x4000122f20) Go away received\nI0821 01:39:30.251579    4622 log.go:172] (0x4000122f20) (0x400070fea0) Stream removed, broadcasting: 1\nI0821 01:39:30.251956    4622 log.go:172] (0x4000122f20) (0x40006ac780) Stream removed, broadcasting: 3\nI0821 01:39:30.252192    4622 log.go:172] (0x4000122f20) (0x400045f540) Stream removed, broadcasting: 5\n"
Aug 21 01:39:30.267: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 01:39:30.267: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 01:39:30.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:39:31.720: INFO: stderr: "I0821 01:39:31.613508    4644 log.go:172] (0x4000a86000) (0x4000742000) Create stream\nI0821 01:39:31.617227    4644 log.go:172] (0x4000a86000) (0x4000742000) Stream added, broadcasting: 1\nI0821 01:39:31.626109    4644 log.go:172] (0x4000a86000) Reply frame received for 1\nI0821 01:39:31.626660    4644 log.go:172] (0x4000a86000) (0x40007f3d60) Create stream\nI0821 01:39:31.626716    4644 log.go:172] (0x4000a86000) (0x40007f3d60) Stream added, broadcasting: 3\nI0821 01:39:31.627796    4644 log.go:172] (0x4000a86000) Reply frame received for 3\nI0821 01:39:31.628028    4644 log.go:172] (0x4000a86000) (0x40007420a0) Create stream\nI0821 01:39:31.628080    4644 log.go:172] (0x4000a86000) (0x40007420a0) Stream added, broadcasting: 5\nI0821 01:39:31.629201    4644 log.go:172] (0x4000a86000) Reply frame received for 5\nI0821 01:39:31.697366    4644 log.go:172] (0x4000a86000) Data frame received for 3\nI0821 01:39:31.697740    4644 log.go:172] (0x4000a86000) Data frame received for 5\nI0821 01:39:31.698024    4644 log.go:172] (0x40007f3d60) (3) Data frame handling\nI0821 01:39:31.698373    4644 log.go:172] (0x40007420a0) (5) Data frame handling\nI0821 01:39:31.698637    4644 log.go:172] (0x4000a86000) Data frame received for 1\nI0821 01:39:31.698798    4644 log.go:172] (0x4000742000) (1) Data frame handling\nI0821 01:39:31.701055    4644 log.go:172] (0x40007f3d60) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 01:39:31.701733    4644 log.go:172] (0x4000a86000) Data frame received for 3\nI0821 01:39:31.701867    4644 log.go:172] (0x40007f3d60) (3) Data frame handling\nI0821 01:39:31.702228    4644 log.go:172] (0x40007420a0) (5) Data frame sent\nI0821 01:39:31.702355    4644 log.go:172] (0x4000a86000) Data frame received for 5\nI0821 01:39:31.702518    4644 log.go:172] (0x40007420a0) (5) Data frame handling\nI0821 01:39:31.702753    4644 log.go:172] (0x4000742000) (1) Data frame sent\nI0821 01:39:31.705695    4644 log.go:172] (0x4000a86000) (0x4000742000) Stream removed, broadcasting: 1\nI0821 01:39:31.706680    4644 log.go:172] (0x4000a86000) Go away received\nI0821 01:39:31.709845    4644 log.go:172] (0x4000a86000) (0x4000742000) Stream removed, broadcasting: 1\nI0821 01:39:31.710131    4644 log.go:172] (0x4000a86000) (0x40007f3d60) Stream removed, broadcasting: 3\nI0821 01:39:31.710317    4644 log.go:172] (0x4000a86000) (0x40007420a0) Stream removed, broadcasting: 5\n"
Aug 21 01:39:31.721: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 01:39:31.721: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 01:39:31.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:39:33.207: INFO: stderr: "I0821 01:39:33.062020    4667 log.go:172] (0x4000a74c60) (0x400070a140) Create stream\nI0821 01:39:33.068319    4667 log.go:172] (0x4000a74c60) (0x400070a140) Stream added, broadcasting: 1\nI0821 01:39:33.079785    4667 log.go:172] (0x4000a74c60) Reply frame received for 1\nI0821 01:39:33.080314    4667 log.go:172] (0x4000a74c60) (0x40007ce000) Create stream\nI0821 01:39:33.080369    4667 log.go:172] (0x4000a74c60) (0x40007ce000) Stream added, broadcasting: 3\nI0821 01:39:33.082367    4667 log.go:172] (0x4000a74c60) Reply frame received for 3\nI0821 01:39:33.082948    4667 log.go:172] (0x4000a74c60) (0x40007e4000) Create stream\nI0821 01:39:33.083070    4667 log.go:172] (0x4000a74c60) (0x40007e4000) Stream added, broadcasting: 5\nI0821 01:39:33.085312    4667 log.go:172] (0x4000a74c60) Reply frame received for 5\nI0821 01:39:33.183509    4667 log.go:172] (0x4000a74c60) Data frame received for 1\nI0821 01:39:33.183874    4667 log.go:172] (0x4000a74c60) Data frame received for 3\nI0821 01:39:33.184168    4667 log.go:172] (0x40007ce000) (3) Data frame handling\nI0821 01:39:33.184457    4667 log.go:172] (0x4000a74c60) Data frame received for 5\nI0821 01:39:33.184597    4667 log.go:172] (0x40007e4000) (5) Data frame handling\nI0821 01:39:33.184936    4667 log.go:172] (0x400070a140) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 01:39:33.186624    4667 log.go:172] (0x40007e4000) (5) Data frame sent\nI0821 01:39:33.186754    4667 log.go:172] (0x400070a140) (1) Data frame sent\nI0821 01:39:33.187939    4667 log.go:172] (0x4000a74c60) Data frame received for 5\nI0821 01:39:33.188077    4667 log.go:172] (0x40007e4000) (5) Data frame handling\nI0821 01:39:33.188894    4667 log.go:172] (0x40007ce000) (3) Data frame sent\nI0821 01:39:33.188992    4667 log.go:172] (0x4000a74c60) Data frame received for 3\nI0821 01:39:33.189067    4667 log.go:172] (0x40007ce000) (3) Data frame handling\nI0821 01:39:33.190425    4667 log.go:172] (0x4000a74c60) (0x400070a140) Stream removed, broadcasting: 1\nI0821 01:39:33.191228    4667 log.go:172] (0x4000a74c60) Go away received\nI0821 01:39:33.194188    4667 log.go:172] (0x4000a74c60) (0x400070a140) Stream removed, broadcasting: 1\nI0821 01:39:33.195431    4667 log.go:172] (0x4000a74c60) (0x40007ce000) Stream removed, broadcasting: 3\nI0821 01:39:33.196478    4667 log.go:172] (0x4000a74c60) (0x40007e4000) Stream removed, broadcasting: 5\n"
Aug 21 01:39:33.208: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 01:39:33.208: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 01:39:33.215: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:39:33.215: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:39:33.216: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 21 01:39:33.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 01:39:34.698: INFO: stderr: "I0821 01:39:34.564437    4691 log.go:172] (0x40009cabb0) (0x400077a000) Create stream\nI0821 01:39:34.572433    4691 log.go:172] (0x40009cabb0) (0x400077a000) Stream added, broadcasting: 1\nI0821 01:39:34.587061    4691 log.go:172] (0x40009cabb0) Reply frame received for 1\nI0821 01:39:34.587718    4691 log.go:172] (0x40009cabb0) (0x4000691a40) Create stream\nI0821 01:39:34.587781    4691 log.go:172] (0x40009cabb0) (0x4000691a40) Stream added, broadcasting: 3\nI0821 01:39:34.589443    4691 log.go:172] (0x40009cabb0) Reply frame received for 3\nI0821 01:39:34.589704    4691 log.go:172] (0x40009cabb0) (0x4000764000) Create stream\nI0821 01:39:34.589769    4691 log.go:172] (0x40009cabb0) (0x4000764000) Stream added, broadcasting: 5\nI0821 01:39:34.591243    4691 log.go:172] (0x40009cabb0) Reply frame received for 5\nI0821 01:39:34.675117    4691 log.go:172] (0x40009cabb0) Data frame received for 5\nI0821 01:39:34.675363    4691 log.go:172] (0x40009cabb0) Data frame received for 3\nI0821 01:39:34.675502    4691 log.go:172] (0x40009cabb0) Data frame received for 1\nI0821 01:39:34.675720    4691 log.go:172] (0x4000691a40) (3) Data frame handling\nI0821 01:39:34.675871    4691 log.go:172] (0x4000764000) (5) Data frame handling\nI0821 01:39:34.676319    4691 log.go:172] (0x400077a000) (1) Data frame handling\nI0821 01:39:34.677393    4691 log.go:172] (0x4000691a40) (3) Data frame sent\nI0821 01:39:34.677977    4691 log.go:172] (0x40009cabb0) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 01:39:34.678111    4691 log.go:172] (0x4000691a40) (3) Data frame handling\nI0821 01:39:34.678675    4691 log.go:172] (0x4000764000) (5) Data frame sent\nI0821 01:39:34.678795    4691 log.go:172] (0x40009cabb0) Data frame received for 5\nI0821 01:39:34.679007    4691 log.go:172] (0x400077a000) (1) Data frame sent\nI0821 01:39:34.679167    4691 log.go:172] (0x4000764000) (5) Data frame handling\nI0821 01:39:34.681050    4691 log.go:172] (0x40009cabb0) (0x400077a000) Stream removed, broadcasting: 1\nI0821 01:39:34.684596    4691 log.go:172] (0x40009cabb0) Go away received\nI0821 01:39:34.687260    4691 log.go:172] (0x40009cabb0) (0x400077a000) Stream removed, broadcasting: 1\nI0821 01:39:34.687984    4691 log.go:172] (0x40009cabb0) (0x4000691a40) Stream removed, broadcasting: 3\nI0821 01:39:34.688361    4691 log.go:172] (0x40009cabb0) (0x4000764000) Stream removed, broadcasting: 5\n"
Aug 21 01:39:34.699: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 01:39:34.699: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 01:39:34.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 01:39:36.200: INFO: stderr: "I0821 01:39:36.043995    4714 log.go:172] (0x400011e2c0) (0x4000960000) Create stream\nI0821 01:39:36.048987    4714 log.go:172] (0x400011e2c0) (0x4000960000) Stream added, broadcasting: 1\nI0821 01:39:36.062249    4714 log.go:172] (0x400011e2c0) Reply frame received for 1\nI0821 01:39:36.063638    4714 log.go:172] (0x400011e2c0) (0x4000aec000) Create stream\nI0821 01:39:36.063871    4714 log.go:172] (0x400011e2c0) (0x4000aec000) Stream added, broadcasting: 3\nI0821 01:39:36.065935    4714 log.go:172] (0x400011e2c0) Reply frame received for 3\nI0821 01:39:36.066245    4714 log.go:172] (0x400011e2c0) (0x40008139a0) Create stream\nI0821 01:39:36.066340    4714 log.go:172] (0x400011e2c0) (0x40008139a0) Stream added, broadcasting: 5\nI0821 01:39:36.067589    4714 log.go:172] (0x400011e2c0) Reply frame received for 5\nI0821 01:39:36.112869    4714 log.go:172] (0x400011e2c0) Data frame received for 5\nI0821 01:39:36.113126    4714 log.go:172] (0x40008139a0) (5) Data frame handling\nI0821 01:39:36.113559    4714 log.go:172] (0x40008139a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 01:39:36.176902    4714 log.go:172] (0x400011e2c0) Data frame received for 3\nI0821 01:39:36.177007    4714 log.go:172] (0x4000aec000) (3) Data frame handling\nI0821 01:39:36.177073    4714 log.go:172] (0x4000aec000) (3) Data frame sent\nI0821 01:39:36.177456    4714 log.go:172] (0x400011e2c0) Data frame received for 5\nI0821 01:39:36.177737    4714 log.go:172] (0x40008139a0) (5) Data frame handling\nI0821 01:39:36.177975    4714 log.go:172] (0x400011e2c0) Data frame received for 3\nI0821 01:39:36.178119    4714 log.go:172] (0x4000aec000) (3) Data frame handling\nI0821 01:39:36.179316    4714 log.go:172] (0x400011e2c0) Data frame received for 1\nI0821 01:39:36.179514    4714 log.go:172] (0x4000960000) (1) Data frame handling\nI0821 01:39:36.179700    4714 log.go:172] (0x4000960000) (1) Data frame sent\nI0821 01:39:36.181532    4714 log.go:172] (0x400011e2c0) (0x4000960000) Stream removed, broadcasting: 1\nI0821 01:39:36.184275    4714 log.go:172] (0x400011e2c0) Go away received\nI0821 01:39:36.188680    4714 log.go:172] (0x400011e2c0) (0x4000960000) Stream removed, broadcasting: 1\nI0821 01:39:36.189134    4714 log.go:172] (0x400011e2c0) (0x4000aec000) Stream removed, broadcasting: 3\nI0821 01:39:36.189326    4714 log.go:172] (0x400011e2c0) (0x40008139a0) Stream removed, broadcasting: 5\n"
Aug 21 01:39:36.201: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 01:39:36.201: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 01:39:36.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 01:39:37.698: INFO: stderr: "I0821 01:39:37.528902    4737 log.go:172] (0x40006c8000) (0x4000b02000) Create stream\nI0821 01:39:37.533824    4737 log.go:172] (0x40006c8000) (0x4000b02000) Stream added, broadcasting: 1\nI0821 01:39:37.545095    4737 log.go:172] (0x40006c8000) Reply frame received for 1\nI0821 01:39:37.545886    4737 log.go:172] (0x40006c8000) (0x4000990000) Create stream\nI0821 01:39:37.545971    4737 log.go:172] (0x40006c8000) (0x4000990000) Stream added, broadcasting: 3\nI0821 01:39:37.547510    4737 log.go:172] (0x40006c8000) Reply frame received for 3\nI0821 01:39:37.547761    4737 log.go:172] (0x40006c8000) (0x40006e9b80) Create stream\nI0821 01:39:37.547822    4737 log.go:172] (0x40006c8000) (0x40006e9b80) Stream added, broadcasting: 5\nI0821 01:39:37.549105    4737 log.go:172] (0x40006c8000) Reply frame received for 5\nI0821 01:39:37.648118    4737 log.go:172] (0x40006c8000) Data frame received for 5\nI0821 01:39:37.648471    4737 log.go:172] (0x40006e9b80) (5) Data frame handling\nI0821 01:39:37.649481    4737 log.go:172] (0x40006e9b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 01:39:37.675226    4737 log.go:172] (0x40006c8000) Data frame received for 3\nI0821 01:39:37.675456    4737 log.go:172] (0x4000990000) (3) Data frame handling\nI0821 01:39:37.675613    4737 log.go:172] (0x40006c8000) Data frame received for 5\nI0821 01:39:37.675794    4737 log.go:172] (0x40006e9b80) (5) Data frame handling\nI0821 01:39:37.676112    4737 log.go:172] (0x4000990000) (3) Data frame sent\nI0821 01:39:37.676298    4737 log.go:172] (0x40006c8000) Data frame received for 3\nI0821 01:39:37.676468    4737 log.go:172] (0x4000990000) (3) Data frame handling\nI0821 01:39:37.676664    4737 log.go:172] (0x40006c8000) Data frame received for 1\nI0821 01:39:37.676923    4737 log.go:172] (0x4000b02000) (1) Data frame handling\nI0821 01:39:37.677032    4737 log.go:172] (0x4000b02000) (1) Data frame sent\nI0821 01:39:37.681156    4737 log.go:172] (0x40006c8000) (0x4000b02000) Stream removed, broadcasting: 1\nI0821 01:39:37.683252    4737 log.go:172] (0x40006c8000) Go away received\nI0821 01:39:37.685750    4737 log.go:172] (0x40006c8000) (0x4000b02000) Stream removed, broadcasting: 1\nI0821 01:39:37.686835    4737 log.go:172] (0x40006c8000) (0x4000990000) Stream removed, broadcasting: 3\nI0821 01:39:37.687205    4737 log.go:172] (0x40006c8000) (0x40006e9b80) Stream removed, broadcasting: 5\n"
Aug 21 01:39:37.699: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 01:39:37.699: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

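[Editor's note] The exec invocations above move index.html out of the htdocs root on each pod so the webserver's readiness probe starts failing. The logged command line can be decomposed as a simple argv construction; the helper below is a hypothetical sketch for readability (the real logic lives in the Go e2e framework, not in this function):

```python
def kubectl_exec_argv(namespace, pod, shell_cmd,
                      kubeconfig="/root/.kube/config",
                      kubectl="/usr/local/bin/kubectl"):
    """Build the argv for a 'kubectl exec' call as logged above.

    Hypothetical helper: paths and flags are taken from this test run's
    log lines, not from the framework's actual source.
    """
    return [
        kubectl, f"--kubeconfig={kubeconfig}",
        "exec", f"--namespace={namespace}", pod,
        "--",                      # everything after '--' runs in the pod
        "/bin/sh", "-x", "-c", shell_cmd,
    ]
```

For example, `kubectl_exec_argv("statefulset-6397", "ss-2", "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true")` reproduces the command shown for ss-2; the trailing `|| true` keeps the exec's exit status zero even if the file was already moved.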
Aug 21 01:39:37.699: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 01:39:37.706: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 21 01:39:47.720: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 01:39:47.721: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 01:39:47.721: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 01:39:47.737: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:47.737: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:47.738: INFO: ss-1  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:47.738: INFO: ss-2  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:47.739: INFO: 
Aug 21 01:39:47.739: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:48.748: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:48.749: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:48.749: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:48.749: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:48.750: INFO: 
Aug 21 01:39:48.750: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:49.759: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:49.760: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:49.760: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:49.761: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:49.761: INFO: 
Aug 21 01:39:49.761: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:50.768: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:50.768: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:50.768: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:50.768: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:50.768: INFO: 
Aug 21 01:39:50.769: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:51.777: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:51.777: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:51.778: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:51.778: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:51.778: INFO: 
Aug 21 01:39:51.778: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:52.787: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:52.787: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:52.787: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:52.788: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:52.788: INFO: 
Aug 21 01:39:52.788: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:53.799: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:53.800: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:53.800: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:53.800: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:53.800: INFO: 
Aug 21 01:39:53.800: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:54.810: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:54.810: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:54.810: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:54.811: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:54.811: INFO: 
Aug 21 01:39:54.811: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:55.821: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:55.821: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:55.822: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:55.822: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:55.822: INFO: 
Aug 21 01:39:55.822: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 01:39:56.830: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 01:39:56.830: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:06 +0000 UTC  }]
Aug 21 01:39:56.831: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:56.831: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 01:39:18 +0000 UTC  }]
Aug 21 01:39:56.831: INFO: 
Aug 21 01:39:56.831: INFO: StatefulSet ss has not reached scale 0, at 3
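[Editor's note] The repeated "has not reached scale 0, at 3" lines above come from a poll loop: the framework re-reads the StatefulSet's replica count on an interval until it hits the target or a deadline passes. A minimal, hypothetical sketch of that pattern (not the framework's actual Go implementation):

```python
import time

def wait_for_scale(get_replicas, want, timeout=10.0, interval=1.0,
                   sleep=time.sleep, log=lambda msg: None):
    """Poll get_replicas() until it returns `want` or `timeout` elapses.

    Hypothetical sketch mirroring the 'StatefulSet ss has not reached
    scale 0, at 3' messages above; `get_replicas` stands in for reading
    the StatefulSet's status from the API server.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        got = get_replicas()
        if got == want:
            return True
        log(f"StatefulSet has not reached scale {want}, at {got}")
        sleep(interval)
    return False  # caller treats this as a timeout failure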
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6397
Aug 21 01:39:57.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:39:59.190: INFO: rc: 1
Aug 21 01:39:59.190: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 21 01:40:09.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:40:10.459: INFO: rc: 1
Aug 21 01:40:10.460: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:40:20.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:40:21.693: INFO: rc: 1
Aug 21 01:40:21.693: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:40:31.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:40:32.965: INFO: rc: 1
Aug 21 01:40:32.966: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:40:42.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:40:44.216: INFO: rc: 1
Aug 21 01:40:44.216: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:40:54.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:40:55.479: INFO: rc: 1
Aug 21 01:40:55.479: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:41:05.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:41:06.753: INFO: rc: 1
Aug 21 01:41:06.753: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:41:16.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:41:18.021: INFO: rc: 1
Aug 21 01:41:18.021: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:41:28.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:41:29.238: INFO: rc: 1
Aug 21 01:41:29.238: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:41:39.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:41:40.469: INFO: rc: 1
Aug 21 01:41:40.470: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:41:50.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:41:51.762: INFO: rc: 1
Aug 21 01:41:51.762: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:42:01.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:42:03.009: INFO: rc: 1
Aug 21 01:42:03.009: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:42:13.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:42:14.258: INFO: rc: 1
Aug 21 01:42:14.258: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:42:24.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:42:25.523: INFO: rc: 1
Aug 21 01:42:25.523: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:42:35.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:42:36.768: INFO: rc: 1
Aug 21 01:42:36.769: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:42:46.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:42:48.030: INFO: rc: 1
Aug 21 01:42:48.030: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 01:42:58.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:42:59.259: INFO: rc: 1
Aug 21 01:42:59.259: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... identical RunHostCmd retry blocks (rc: 1, stderr: Error from server (NotFound): pods "ss-0" not found) repeat every 10s from 01:43:09 through 01:44:51 ...]
Aug 21 01:45:01.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 01:45:03.018: INFO: rc: 1
Aug 21 01:45:03.019: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Aug 21 01:45:03.019: INFO: Scaling statefulset ss to 0
Aug 21 01:45:03.059: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 01:45:03.061: INFO: Deleting all statefulset in ns statefulset-6397
Aug 21 01:45:03.064: INFO: Scaling statefulset ss to 0
Aug 21 01:45:03.074: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 01:45:03.077: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:45:03.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6397" for this suite.

• [SLOW TEST:356.411 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":250,"skipped":4133,"failed":0}
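The burst-scaling test above drives an httpd-based StatefulSet whose readiness is toggled by moving index.html into and out of the htdocs directory (the `mv` commands in the log; the NotFound retries occur once `ss-0` has already been scaled away). A minimal sketch of such a StatefulSet; the image, port, and probe are assumptions, not the e2e fixture's exact spec:

```yaml
# Sketch of a StatefulSet like "ss" from the log (fields are illustrative).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test               # a headless Service is assumed to exist
  replicas: 3
  podManagementPolicy: Parallel   # "burst" scaling: pods created/deleted without waiting in order
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4
        readinessProbe:           # fails while /usr/local/apache2/htdocs/index.html is absent
          httpGet:
            path: /index.html
            port: 80
```

With `podManagementPolicy: Parallel`, the controller does not wait for each pod to become Ready before acting on the next, which is why scaling runs to completion even with unhealthy pods.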
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:45:03.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0821 01:45:43.686683       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 01:45:43.686: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:45:43.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3176" for this suite.

• [SLOW TEST:40.566 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":251,"skipped":4145,"failed":0}
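The orphan test deletes a ReplicationController with an orphaning delete option and then waits 30 seconds to confirm the garbage collector leaves the pods alone. A hedged sketch of such an RC and the equivalent deletion; the name and image are illustrative, not the fixture's exact spec:

```yaml
# Illustrative ReplicationController; the e2e fixture's actual name/image may differ.
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc
spec:
  replicas: 2
  selector:
    app: gc-test
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: nginx
# Orphaning delete (the pods survive the RC):
#   kubectl delete rc simpletest-rc --cascade=orphan
# (recent kubectl; releases contemporary with this log used --cascade=false)
```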
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:45:43.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:45:59.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8612" for this suite.

• [SLOW TEST:16.184 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":252,"skipped":4185,"failed":0}
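The STEPs above assert that the quota's `status.used.configmaps` count rises when a ConfigMap is created and falls when it is deleted. A minimal ResourceQuota that tracks ConfigMaps (the name and limit are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    configmaps: "2"   # quota controller records usage in status.used.configmaps
```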
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:45:59.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 21 01:45:59.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 21 01:46:01.217: INFO: stderr: ""
Aug 21 01:46:01.217: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:46:01.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8205" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":253,"skipped":4204,"failed":0}

------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:46:01.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9ff081e8-4859-4fe4-80d0-1ddf70824fd9
STEP: Creating configMap with name cm-test-opt-upd-30eef222-67a5-428e-9697-e46ebe2845f3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9ff081e8-4859-4fe4-80d0-1ddf70824fd9
STEP: Updating configmap cm-test-opt-upd-30eef222-67a5-428e-9697-e46ebe2845f3
STEP: Creating configMap with name cm-test-opt-create-2993d631-5699-4a2b-84ba-8b8969dcc503
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:46:11.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-251" for this suite.

• [SLOW TEST:10.544 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4204,"failed":0}
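The optional-updates test mounts ConfigMap volumes marked `optional: true`, then deletes one ConfigMap, updates another, and creates a third after the pod is running, waiting for each change to appear in the volume. A sketch of such a pod; names, image, and paths are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-del
    - name: cm-create
      mountPath: /etc/cm-create
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del
      optional: true      # pod keeps running even after this ConfigMap is deleted
  - name: cm-create
    configMap:
      name: cm-test-opt-create
      optional: true      # may be created after the pod starts; contents appear later
```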
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:46:11.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:46:11.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c" in namespace "downward-api-717" to be "success or failure"
Aug 21 01:46:11.930: INFO: Pod "downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.581612ms
Aug 21 01:46:13.936: INFO: Pod "downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057847173s
Aug 21 01:46:15.944: INFO: Pod "downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065751493s
STEP: Saw pod success
Aug 21 01:46:15.944: INFO: Pod "downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c" satisfied condition "success or failure"
Aug 21 01:46:15.949: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c container client-container: 
STEP: delete the pod
Aug 21 01:46:16.165: INFO: Waiting for pod downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c to disappear
Aug 21 01:46:16.190: INFO: Pod downwardapi-volume-81238484-0836-487c-b0da-a8fc5a25ef8c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:46:16.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-717" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4215,"failed":0}
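The DefaultMode test projects pod metadata into a downwardAPI volume and checks the file permissions. A minimal sketch (image, mode, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every projected file unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```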
SSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:46:16.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:46:16.324: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c087da22-8194-4c3d-9284-92d765b204de" in namespace "security-context-test-3740" to be "success or failure"
Aug 21 01:46:16.357: INFO: Pod "busybox-readonly-false-c087da22-8194-4c3d-9284-92d765b204de": Phase="Pending", Reason="", readiness=false. Elapsed: 31.973862ms
Aug 21 01:46:18.433: INFO: Pod "busybox-readonly-false-c087da22-8194-4c3d-9284-92d765b204de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108763312s
Aug 21 01:46:20.440: INFO: Pod "busybox-readonly-false-c087da22-8194-4c3d-9284-92d765b204de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115190017s
Aug 21 01:46:20.440: INFO: Pod "busybox-readonly-false-c087da22-8194-4c3d-9284-92d765b204de" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:46:20.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3740" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4220,"failed":0}
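The securityContext test runs a container with `readOnlyRootFilesystem: false` and expects a write to the root filesystem to succeed. A sketch of such a pod (name, image, and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "touch /tmp-file"]   # succeeds because the rootfs is writable
    securityContext:
      readOnlyRootFilesystem: false
```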
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:46:20.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-jxzh
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 01:46:20.834: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jxzh" in namespace "subpath-347" to be "success or failure"
Aug 21 01:46:20.855: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Pending", Reason="", readiness=false. Elapsed: 20.243301ms
Aug 21 01:46:22.906: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071969886s
Aug 21 01:46:24.914: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 4.079722467s
Aug 21 01:46:26.948: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 6.113058831s
Aug 21 01:46:28.954: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 8.119719194s
Aug 21 01:46:30.961: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 10.126713293s
Aug 21 01:46:32.978: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 12.143207046s
Aug 21 01:46:34.984: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 14.149397696s
Aug 21 01:46:36.990: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 16.155501749s
Aug 21 01:46:38.996: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 18.16165823s
Aug 21 01:46:41.007: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 20.172948417s
Aug 21 01:46:43.024: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Running", Reason="", readiness=true. Elapsed: 22.189449833s
Aug 21 01:46:45.030: INFO: Pod "pod-subpath-test-downwardapi-jxzh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.19566843s
STEP: Saw pod success
Aug 21 01:46:45.030: INFO: Pod "pod-subpath-test-downwardapi-jxzh" satisfied condition "success or failure"
Aug 21 01:46:45.034: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-jxzh container test-container-subpath-downwardapi-jxzh: 
STEP: delete the pod
Aug 21 01:46:45.052: INFO: Waiting for pod pod-subpath-test-downwardapi-jxzh to disappear
Aug 21 01:46:45.056: INFO: Pod pod-subpath-test-downwardapi-jxzh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-jxzh
Aug 21 01:46:45.056: INFO: Deleting pod "pod-subpath-test-downwardapi-jxzh" in namespace "subpath-347"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:46:45.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-347" for this suite.

• [SLOW TEST:24.612 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":257,"skipped":4230,"failed":0}
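The subpath test mounts only a subdirectory of a downwardAPI volume via `subPath` and verifies the projected file is readable through it while the pod runs. A hedged sketch; mount paths, image, and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test
      subPath: downward          # mounts only the "downward" directory of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: downward/podname   # written atomically by the kubelet's atomic writer
        fieldRef:
          fieldPath: metadata.name
```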
SSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:46:45.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 21 01:46:45.183: INFO: Waiting up to 5m0s for pod "downward-api-a34985c4-2675-4471-aa07-63a1747e5314" in namespace "downward-api-3238" to be "success or failure"
Aug 21 01:46:45.200: INFO: Pod "downward-api-a34985c4-2675-4471-aa07-63a1747e5314": Phase="Pending", Reason="", readiness=false. Elapsed: 17.48677ms
Aug 21 01:46:47.205: INFO: Pod "downward-api-a34985c4-2675-4471-aa07-63a1747e5314": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022313643s
Aug 21 01:46:49.211: INFO: Pod "downward-api-a34985c4-2675-4471-aa07-63a1747e5314": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02789328s
STEP: Saw pod success
Aug 21 01:46:49.211: INFO: Pod "downward-api-a34985c4-2675-4471-aa07-63a1747e5314" satisfied condition "success or failure"
Aug 21 01:46:49.216: INFO: Trying to get logs from node jerma-worker pod downward-api-a34985c4-2675-4471-aa07-63a1747e5314 container dapi-container: 
STEP: delete the pod
Aug 21 01:46:49.263: INFO: Waiting for pod downward-api-a34985c4-2675-4471-aa07-63a1747e5314 to disappear
Aug 21 01:46:49.274: INFO: Pod downward-api-a34985c4-2675-4471-aa07-63a1747e5314 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:46:49.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3238" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4233,"failed":0}
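The host-IP test exposes the node's IP to the container through a downward API environment variable. A minimal sketch (pod name, image, and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # IP of the node the pod is scheduled to
```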
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:46:49.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:46:49.528: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5da9a3ed-88e7-4d4c-8ba3-df7b64a21950", Controller:(*bool)(0x40045db2ca), BlockOwnerDeletion:(*bool)(0x40045db2cb)}}
Aug 21 01:46:49.603: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b5078b0c-fb79-49a9-8c80-f96f1d70a367", Controller:(*bool)(0x400423f85a), BlockOwnerDeletion:(*bool)(0x400423f85b)}}
Aug 21 01:46:49.641: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c917db35-30b4-4fc5-ab70-51dc0ea38838", Controller:(*bool)(0x40045db46a), BlockOwnerDeletion:(*bool)(0x40045db46b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:46:54.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9152" for this suite.

• [SLOW TEST:5.387 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":259,"skipped":4278,"failed":0}
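The garbage-collector test above logs three pods whose ownerReferences form a cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, and deletion must not deadlock on that circle. A minimal sketch (not the e2e suite's code; pod metadata is reduced to the fields the log shows) of the ownership graph it constructs:

```python
# Sketch: reconstruct the dependency circle the GC test logs for
# pod1/pod2/pod3 and verify it really is a single cycle.
def owner_graph(pods):
    """Map each pod name to the name of the pod it lists as its owner."""
    return {name: meta["ownerReferences"][0]["name"] for name, meta in pods.items()}

def is_single_cycle(graph):
    """True if following owner links from any node visits every node
    exactly once and returns to the start -- a dependency circle."""
    start = next(iter(graph))
    seen, node = set(), start
    while node not in seen:
        seen.add(node)
        node = graph[node]
    return node == start and seen == set(graph)

# Ownership as reported in the log lines above.
pods = {
    "pod1": {"ownerReferences": [{"apiVersion": "v1", "kind": "Pod", "name": "pod3"}]},
    "pod2": {"ownerReferences": [{"apiVersion": "v1", "kind": "Pod", "name": "pod1"}]},
    "pod3": {"ownerReferences": [{"apiVersion": "v1", "kind": "Pod", "name": "pod2"}]},
}
```

The API server accepts such circular ownerReferences; the garbage collector has to detect the cycle rather than wait forever for an owner to be deleted first, which is what this conformance test verifies.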
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:46:54.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 21 01:46:54.777: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001442 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 01:46:54.778: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001442 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 21 01:47:04.791: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001487 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 21 01:47:04.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001487 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 21 01:47:14.805: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001518 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 01:47:14.806: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001518 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 21 01:47:24.817: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001548 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 01:47:24.818: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-a 90f8b7a0-f213-4693-83e0-d19cef6c4e48 2001548 0 2020-08-21 01:46:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 21 01:47:34.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b 2f89060a-07d5-4553-b98b-eee3e93d98d2 2001578 0 2020-08-21 01:47:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 01:47:34.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b 2f89060a-07d5-4553-b98b-eee3e93d98d2 2001578 0 2020-08-21 01:47:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 21 01:47:44.841: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b 2f89060a-07d5-4553-b98b-eee3e93d98d2 2001608 0 2020-08-21 01:47:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 01:47:44.842: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4549 /api/v1/namespaces/watch-4549/configmaps/e2e-watch-test-configmap-b 2f89060a-07d5-4553-b98b-eee3e93d98d2 2001608 0 2020-08-21 01:47:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:47:54.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4549" for this suite.

• [SLOW TEST:60.166 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":260,"skipped":4287,"failed":0}
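Every event in the Watchers test above appears twice because two of the three watches match it: the label-A watch and the A-or-B watch both see configmap-a events, while the B and A-or-B watches see configmap-b events. A simplified sketch of that label-selector dispatch (assumed structure, not the suite's implementation):

```python
# Sketch: three watchers select on label A, label B, and A-or-B; each
# records the event types whose configmap label matches its selector,
# mirroring why every event in the log above is reported twice.
def dispatch(event, watchers):
    label = event["object"]["labels"].get("watch-this-configmap")
    for selector, received in watchers:
        if label in selector:
            received.append(event["type"])

watch_a, watch_b, watch_ab = [], [], []
watchers = [
    ({"multiple-watchers-A"}, watch_a),
    ({"multiple-watchers-B"}, watch_b),
    ({"multiple-watchers-A", "multiple-watchers-B"}, watch_ab),
]

# The event sequence from the log: configmap A is added, modified twice,
# and deleted; configmap B is added and deleted.
events = [
    {"type": "ADDED",    "object": {"labels": {"watch-this-configmap": "multiple-watchers-A"}}},
    {"type": "MODIFIED", "object": {"labels": {"watch-this-configmap": "multiple-watchers-A"}}},
    {"type": "MODIFIED", "object": {"labels": {"watch-this-configmap": "multiple-watchers-A"}}},
    {"type": "DELETED",  "object": {"labels": {"watch-this-configmap": "multiple-watchers-A"}}},
    {"type": "ADDED",    "object": {"labels": {"watch-this-configmap": "multiple-watchers-B"}}},
    {"type": "DELETED",  "object": {"labels": {"watch-this-configmap": "multiple-watchers-B"}}},
]
for e in events:
    dispatch(e, watchers)
```

Against a real API server the selectors would be `labelSelector` strings on the watch request rather than Python sets, but the fan-out behavior is the same.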
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:47:54.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 21 01:47:54.931: INFO: Waiting up to 5m0s for pod "pod-50165598-55e4-4045-a8d3-d9582b84a330" in namespace "emptydir-1863" to be "success or failure"
Aug 21 01:47:54.944: INFO: Pod "pod-50165598-55e4-4045-a8d3-d9582b84a330": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033378ms
Aug 21 01:47:56.951: INFO: Pod "pod-50165598-55e4-4045-a8d3-d9582b84a330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019453377s
Aug 21 01:47:58.958: INFO: Pod "pod-50165598-55e4-4045-a8d3-d9582b84a330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026378932s
STEP: Saw pod success
Aug 21 01:47:58.958: INFO: Pod "pod-50165598-55e4-4045-a8d3-d9582b84a330" satisfied condition "success or failure"
Aug 21 01:47:58.963: INFO: Trying to get logs from node jerma-worker2 pod pod-50165598-55e4-4045-a8d3-d9582b84a330 container test-container: 
STEP: delete the pod
Aug 21 01:47:59.018: INFO: Waiting for pod pod-50165598-55e4-4045-a8d3-d9582b84a330 to disappear
Aug 21 01:47:59.036: INFO: Pod pod-50165598-55e4-4045-a8d3-d9582b84a330 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:47:59.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1863" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4322,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:47:59.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 01:47:59.260: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:47:59.303: INFO: Number of nodes with available pods: 0
Aug 21 01:47:59.303: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:00.367: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:00.378: INFO: Number of nodes with available pods: 0
Aug 21 01:48:00.378: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:01.311: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:01.316: INFO: Number of nodes with available pods: 0
Aug 21 01:48:01.316: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:02.315: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:02.321: INFO: Number of nodes with available pods: 0
Aug 21 01:48:02.321: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:03.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:03.321: INFO: Number of nodes with available pods: 0
Aug 21 01:48:03.321: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:04.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:04.333: INFO: Number of nodes with available pods: 2
Aug 21 01:48:04.333: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 21 01:48:04.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:04.369: INFO: Number of nodes with available pods: 1
Aug 21 01:48:04.369: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:05.380: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:05.387: INFO: Number of nodes with available pods: 1
Aug 21 01:48:05.387: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:06.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:06.429: INFO: Number of nodes with available pods: 1
Aug 21 01:48:06.429: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:07.380: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:07.386: INFO: Number of nodes with available pods: 1
Aug 21 01:48:07.386: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:08.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:08.387: INFO: Number of nodes with available pods: 1
Aug 21 01:48:08.387: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:09.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:09.386: INFO: Number of nodes with available pods: 1
Aug 21 01:48:09.386: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:10.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:10.387: INFO: Number of nodes with available pods: 1
Aug 21 01:48:10.387: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:11.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:11.387: INFO: Number of nodes with available pods: 1
Aug 21 01:48:11.387: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:12.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:12.387: INFO: Number of nodes with available pods: 1
Aug 21 01:48:12.387: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:13.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:13.386: INFO: Number of nodes with available pods: 1
Aug 21 01:48:13.386: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:14.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:14.387: INFO: Number of nodes with available pods: 1
Aug 21 01:48:14.387: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 01:48:15.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 01:48:15.387: INFO: Number of nodes with available pods: 2
Aug 21 01:48:15.387: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5472, will wait for the garbage collector to delete the pods
Aug 21 01:48:15.453: INFO: Deleting DaemonSet.extensions daemon-set took: 8.097658ms
Aug 21 01:48:17.554: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.100786984s
Aug 21 01:48:31.760: INFO: Number of nodes with available pods: 0
Aug 21 01:48:31.761: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 01:48:31.765: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5472/daemonsets","resourceVersion":"2001829"},"items":null}

Aug 21 01:48:31.768: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5472/pods","resourceVersion":"2001829"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:48:31.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5472" for this suite.

• [SLOW TEST:32.746 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":262,"skipped":4337,"failed":0}
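The repeated "can't tolerate node jerma-control-plane" lines above come from the test excluding tainted nodes before counting daemon pods: the DaemonSet carries no toleration for the control plane's `NoSchedule` taint, so only the two workers are expected to run a pod. A sketch of that node-filtering step (assumed logic mirroring the log, not the suite's code):

```python
# Sketch: a DaemonSet pod should land on every node whose taints it
# tolerates; nodes with untolerated taints are skipped, as the log shows
# for the control-plane node.
def schedulable_nodes(nodes, tolerated_keys):
    """Names of nodes where every taint key is tolerated."""
    return [name for name, taints in nodes.items()
            if all(t["key"] in tolerated_keys for t in taints)]

nodes = {
    "jerma-control-plane": [{"key": "node-role.kubernetes.io/master",
                             "effect": "NoSchedule"}],
    "jerma-worker": [],
    "jerma-worker2": [],
}

# The test's DaemonSet tolerates no taints, so the control plane is skipped.
targets = schedulable_nodes(nodes, tolerated_keys=set())
```

The test then polls until the "number of nodes with available pods" equals the number of schedulable nodes (2 here), both after the initial rollout and again after killing one daemon pod to confirm it is revived.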
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:48:31.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:48:31.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac" in namespace "projected-5613" to be "success or failure"
Aug 21 01:48:31.910: INFO: Pod "downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.626332ms
Aug 21 01:48:33.916: INFO: Pod "downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014207895s
Aug 21 01:48:35.923: INFO: Pod "downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020637179s
STEP: Saw pod success
Aug 21 01:48:35.923: INFO: Pod "downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac" satisfied condition "success or failure"
Aug 21 01:48:35.927: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac container client-container: 
STEP: delete the pod
Aug 21 01:48:35.959: INFO: Waiting for pod downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac to disappear
Aug 21 01:48:35.990: INFO: Pod downwardapi-volume-1e0a52f7-143d-4dc5-a9d8-7cebf44a73ac no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:48:35.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5613" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4390,"failed":0}
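The projected downward-API test above checks a documented defaulting rule: a `resourceFieldRef` on `limits.memory` resolves to the node's allocatable memory when the container sets no memory limit. A sketch of that fallback (assumed simplification; real values are `resource.Quantity` strings, reduced here to byte counts):

```python
# Sketch: resolve a downward-API limits.memory reference -- use the
# container's limit if set, otherwise default to node allocatable.
def resolve_memory_limit(container, node_allocatable_bytes):
    limits = container.get("resources", {}).get("limits", {})
    return limits.get("memory", node_allocatable_bytes)

NODE_ALLOCATABLE = 16 * 1024**3  # hypothetical node with 16 GiB allocatable

no_limit = resolve_memory_limit({}, NODE_ALLOCATABLE)
with_limit = resolve_memory_limit(
    {"resources": {"limits": {"memory": 512 * 1024**2}}}, NODE_ALLOCATABLE)
```

In the test itself the container reads the resolved value from a projected downward-API volume file and the framework compares it against the node's reported allocatable memory.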
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:48:36.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 01:48:38.398: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 01:48:40.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571318, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571318, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571318, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571318, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 01:48:43.473: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:48:43.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7397" for this suite.
STEP: Destroying namespace "webhook-7397-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.995 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":264,"skipped":4404,"failed":0}
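The webhook test above lists its mutating webhook configurations by label, confirms a configmap gets mutated while they exist, deletes them as a collection via the same selector, and confirms a new configmap is left untouched. A sketch of the collection-delete step (illustrative names and label key, not the suite's):

```python
# Sketch: a labelled "delete collection" removes every configuration
# matching the selector; unlabelled configurations survive, so later
# objects are no longer mutated.
def delete_collection(configs, key, value):
    """Return the configurations that survive deleting the collection
    selected by key=value."""
    return [c for c in configs if c["labels"].get(key) != value]

configs = [
    {"name": "e2e-test-mutating-webhook-0", "labels": {"e2e-list-test": "true"}},
    {"name": "e2e-test-mutating-webhook-1", "labels": {"e2e-list-test": "true"}},
    {"name": "unrelated-webhook", "labels": {}},
]
remaining = delete_collection(configs, "e2e-list-test", "true")
```

Against the real API this is a single DELETE on the `mutatingwebhookconfigurations` collection with a `labelSelector`, rather than per-object deletes.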
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:48:44.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:48:44.152: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 21 01:48:49.158: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 01:48:49.159: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 21 01:48:53.222: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-9817 /apis/apps/v1/namespaces/deployment-9817/deployments/test-cleanup-deployment e06a66eb-c848-43a7-9488-b06423e3457e 2002065 1 2020-08-21 01:48:49 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400423e2e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 01:48:49 +0000 UTC,LastTransitionTime:2020-08-21 01:48:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-08-21 01:48:52 +0000 UTC,LastTransitionTime:2020-08-21 01:48:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 21 01:48:53.229: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-9817 /apis/apps/v1/namespaces/deployment-9817/replicasets/test-cleanup-deployment-55ffc6b7b6 cb8bca6a-6613-4e34-8bef-b40d195ad543 2002054 1 2020-08-21 01:48:49 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment e06a66eb-c848-43a7-9488-b06423e3457e 0x4004a5dfa7 0x4004a5dfa8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40045da018  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 01:48:53.235: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-rxt9x" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-rxt9x test-cleanup-deployment-55ffc6b7b6- deployment-9817 /api/v1/namespaces/deployment-9817/pods/test-cleanup-deployment-55ffc6b7b6-rxt9x 9174c2c6-8b30-41c5-ae9a-7b1fb237110e 2002053 0 2020-08-21 01:48:49 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 cb8bca6a-6613-4e34-8bef-b40d195ad543 0x40045da387 0x40045da388}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6sft2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6sft2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6sft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:48:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:48:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:48:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 01:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.126,StartTime:2020-08-21 01:48:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 01:48:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://16a9be3f1001edfcf5cc6720acf4120ed65c606d7509dfb28a3c2840915206ec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:48:53.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9817" for this suite.

• [SLOW TEST:9.246 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":265,"skipped":4415,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:48:53.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 01:48:53.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f" in namespace "projected-8513" to be "success or failure"
Aug 21 01:48:53.565: INFO: Pod "downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.815749ms
Aug 21 01:48:55.571: INFO: Pod "downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012218993s
Aug 21 01:48:57.577: INFO: Pod "downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018854128s
STEP: Saw pod success
Aug 21 01:48:57.577: INFO: Pod "downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f" satisfied condition "success or failure"
Aug 21 01:48:57.581: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f container client-container: 
STEP: delete the pod
Aug 21 01:48:57.602: INFO: Waiting for pod downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f to disappear
Aug 21 01:48:57.607: INFO: Pod downwardapi-volume-5cc0bc20-aff4-44bf-9304-29eae3e86b5f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:48:57.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8513" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4425,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:48:57.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 21 01:48:57.728: INFO: Waiting up to 5m0s for pod "downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4" in namespace "downward-api-3652" to be "success or failure"
Aug 21 01:48:57.773: INFO: Pod "downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.581616ms
Aug 21 01:48:59.909: INFO: Pod "downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180788465s
Aug 21 01:49:01.916: INFO: Pod "downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187848769s
STEP: Saw pod success
Aug 21 01:49:01.916: INFO: Pod "downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4" satisfied condition "success or failure"
Aug 21 01:49:01.921: INFO: Trying to get logs from node jerma-worker pod downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4 container dapi-container: 
STEP: delete the pod
Aug 21 01:49:02.001: INFO: Waiting for pod downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4 to disappear
Aug 21 01:49:02.032: INFO: Pod downward-api-6b980e49-d23e-4a2b-b222-c25e812c5df4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:49:02.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3652" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4427,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:49:02.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9332
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 21 01:49:02.201: INFO: Found 0 stateful pods, waiting for 3
Aug 21 01:49:12.209: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:49:12.209: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:49:12.209: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 21 01:49:12.308: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 21 01:49:22.353: INFO: Updating stateful set ss2
Aug 21 01:49:22.435: INFO: Waiting for Pod statefulset-9332/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 21 01:49:33.042: INFO: Found 2 stateful pods, waiting for 3
Aug 21 01:49:43.049: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:49:43.049: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 01:49:43.049: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 21 01:49:43.078: INFO: Updating stateful set ss2
Aug 21 01:49:43.109: INFO: Waiting for Pod statefulset-9332/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 01:49:53.146: INFO: Updating stateful set ss2
Aug 21 01:49:53.173: INFO: Waiting for StatefulSet statefulset-9332/ss2 to complete update
Aug 21 01:49:53.174: INFO: Waiting for Pod statefulset-9332/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 01:50:03.196: INFO: Deleting all statefulset in ns statefulset-9332
Aug 21 01:50:03.202: INFO: Scaling statefulset ss2 to 0
Aug 21 01:50:23.277: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 01:50:23.281: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:50:23.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9332" for this suite.

• [SLOW TEST:81.278 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":268,"skipped":4440,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:50:23.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:50:23.454: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:50:24.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3980" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":269,"skipped":4461,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:50:24.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-589efcd8-b1da-439a-8554-c4295234328f
STEP: Creating a pod to test consume configMaps
Aug 21 01:50:24.632: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0" in namespace "configmap-60" to be "success or failure"
Aug 21 01:50:24.681: INFO: Pod "pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 49.414037ms
Aug 21 01:50:26.691: INFO: Pod "pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059246996s
Aug 21 01:50:28.700: INFO: Pod "pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068046131s
STEP: Saw pod success
Aug 21 01:50:28.700: INFO: Pod "pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0" satisfied condition "success or failure"
Aug 21 01:50:28.707: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0 container configmap-volume-test: 
STEP: delete the pod
Aug 21 01:50:28.754: INFO: Waiting for pod pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0 to disappear
Aug 21 01:50:28.794: INFO: Pod pod-configmaps-f5cc7844-7fdf-4bb9-9c60-429990f4a4a0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:50:28.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-60" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4476,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:50:28.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 01:50:28.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 01:50:48.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3760 create -f -'
Aug 21 01:50:52.552: INFO: stderr: ""
Aug 21 01:50:52.552: INFO: stdout: "e2e-test-crd-publish-openapi-1097-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 21 01:50:52.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3760 delete e2e-test-crd-publish-openapi-1097-crds test-cr'
Aug 21 01:50:53.846: INFO: stderr: ""
Aug 21 01:50:53.846: INFO: stdout: "e2e-test-crd-publish-openapi-1097-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 21 01:50:53.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3760 apply -f -'
Aug 21 01:50:55.476: INFO: stderr: ""
Aug 21 01:50:55.477: INFO: stdout: "e2e-test-crd-publish-openapi-1097-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 21 01:50:55.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3760 delete e2e-test-crd-publish-openapi-1097-crds test-cr'
Aug 21 01:50:56.715: INFO: stderr: ""
Aug 21 01:50:56.716: INFO: stdout: "e2e-test-crd-publish-openapi-1097-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 21 01:50:56.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1097-crds'
Aug 21 01:50:58.269: INFO: stderr: ""
Aug 21 01:50:58.269: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1097-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:51:08.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3760" for this suite.

• [SLOW TEST:39.551 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":271,"skipped":4477,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:51:08.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 21 01:51:08.504: INFO: Waiting up to 5m0s for pod "downward-api-fffa727e-aea6-4d85-96c6-338960a08084" in namespace "downward-api-6862" to be "success or failure"
Aug 21 01:51:08.514: INFO: Pod "downward-api-fffa727e-aea6-4d85-96c6-338960a08084": Phase="Pending", Reason="", readiness=false. Elapsed: 10.411637ms
Aug 21 01:51:10.522: INFO: Pod "downward-api-fffa727e-aea6-4d85-96c6-338960a08084": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017839337s
Aug 21 01:51:12.528: INFO: Pod "downward-api-fffa727e-aea6-4d85-96c6-338960a08084": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024275232s
STEP: Saw pod success
Aug 21 01:51:12.528: INFO: Pod "downward-api-fffa727e-aea6-4d85-96c6-338960a08084" satisfied condition "success or failure"
Aug 21 01:51:12.534: INFO: Trying to get logs from node jerma-worker pod downward-api-fffa727e-aea6-4d85-96c6-338960a08084 container dapi-container: 
STEP: delete the pod
Aug 21 01:51:12.571: INFO: Waiting for pod downward-api-fffa727e-aea6-4d85-96c6-338960a08084 to disappear
Aug 21 01:51:12.580: INFO: Pod downward-api-fffa727e-aea6-4d85-96c6-338960a08084 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:51:12.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6862" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4488,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:51:12.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:51:12.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2873" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4488,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:51:12.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 21 01:51:12.886: INFO: Waiting up to 5m0s for pod "pod-7c990c40-8f01-4600-878d-9732252b0be7" in namespace "emptydir-6506" to be "success or failure"
Aug 21 01:51:12.899: INFO: Pod "pod-7c990c40-8f01-4600-878d-9732252b0be7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.216046ms
Aug 21 01:51:14.907: INFO: Pod "pod-7c990c40-8f01-4600-878d-9732252b0be7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02051223s
Aug 21 01:51:16.913: INFO: Pod "pod-7c990c40-8f01-4600-878d-9732252b0be7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02699011s
STEP: Saw pod success
Aug 21 01:51:16.913: INFO: Pod "pod-7c990c40-8f01-4600-878d-9732252b0be7" satisfied condition "success or failure"
Aug 21 01:51:16.918: INFO: Trying to get logs from node jerma-worker pod pod-7c990c40-8f01-4600-878d-9732252b0be7 container test-container: 
STEP: delete the pod
Aug 21 01:51:16.940: INFO: Waiting for pod pod-7c990c40-8f01-4600-878d-9732252b0be7 to disappear
Aug 21 01:51:16.951: INFO: Pod pod-7c990c40-8f01-4600-878d-9732252b0be7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:51:16.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6506" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4491,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:51:16.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-79500374-b5a7-4035-8f00-ce8911ba8c5a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-79500374-b5a7-4035-8f00-ce8911ba8c5a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:52:33.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5187" for this suite.

• [SLOW TEST:76.729 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4497,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:52:33.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 01:52:35.142: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 01:52:37.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571555, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571555, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571555, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733571555, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 01:52:40.354: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:52:52.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5671" for this suite.
STEP: Destroying namespace "webhook-5671-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.077 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":276,"skipped":4502,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:52:52.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 21 01:52:52.872: INFO: Waiting up to 5m0s for pod "downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202" in namespace "downward-api-1004" to be "success or failure"
Aug 21 01:52:52.876: INFO: Pod "downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202": Phase="Pending", Reason="", readiness=false. Elapsed: 3.988545ms
Aug 21 01:52:54.883: INFO: Pod "downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011158588s
Aug 21 01:52:56.890: INFO: Pod "downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018259525s
STEP: Saw pod success
Aug 21 01:52:56.891: INFO: Pod "downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202" satisfied condition "success or failure"
Aug 21 01:52:56.895: INFO: Trying to get logs from node jerma-worker2 pod downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202 container dapi-container: 
STEP: delete the pod
Aug 21 01:52:56.972: INFO: Waiting for pod downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202 to disappear
Aug 21 01:52:56.983: INFO: Pod downward-api-e93fd5a9-ae7b-4cee-a82c-dae741cb5202 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:52:56.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1004" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4532,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 01:52:56.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 01:53:01.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9236" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":278,"skipped":4536,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 21 01:53:01.263: INFO: Running AfterSuite actions on all nodes
Aug 21 01:53:01.264: INFO: Running AfterSuite actions on node 1
Aug 21 01:53:01.264: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 6119.832 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS