I0917 16:27:48.272358 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0917 16:27:48.277659 7 e2e.go:109] Starting e2e run "838b0961-96dd-41e5-b72d-ccfdfd7426b6" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1600360052 - Will randomize all specs
Will run 278 of 4844 specs

Sep 17 16:27:48.859: INFO: >>> kubeConfig: /root/.kube/config
Sep 17 16:27:48.907: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 17 16:27:49.105: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 17 16:27:49.277: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 17 16:27:49.278: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 17 16:27:49.278: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 17 16:27:49.317: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 17 16:27:49.317: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 17 16:27:49.317: INFO: e2e test version: v1.17.11
Sep 17 16:27:49.322: INFO: kube-apiserver version: v1.17.5
Sep 17 16:27:49.324: INFO: >>> kubeConfig: /root/.kube/config
Sep 17 16:27:49.352: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:27:49.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Sep 17 16:27:49.502: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-9ac7042c-2ec4-4e59-8a5d-dd0e06af2120
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-9ac7042c-2ec4-4e59-8a5d-dd0e06af2120
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:27:55.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7153" for this suite.
• [SLOW TEST:6.293 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":14,"failed":0}
S
------------------------------
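For reference alongside the spec above: a minimal Go sketch of a pod that consumes a ConfigMap through a projected volume, the mechanism this test exercises. Built from k8s.io/api struct literals; the pod name, image, command, and mount path are illustrative assumptions, not taken from the test source.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "watcher",
				Image: "busybox:1.29",
				// Keep printing the projected file; the kubelet rewrites it
				// after the ConfigMap is updated, without a pod restart.
				Command:      []string{"sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg-volume", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}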
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:27:55.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 16:27:55.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0" in namespace "projected-2621" to be "success or failure"
Sep 17 16:27:55.761: INFO: Pod "downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.597678ms
Sep 17 16:27:57.768: INFO: Pod "downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026853708s
Sep 17 16:27:59.777: INFO: Pod "downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03562567s
STEP: Saw pod success
Sep 17 16:27:59.778: INFO: Pod "downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0" satisfied condition "success or failure"
Sep 17 16:27:59.783: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0 container client-container:
STEP: delete the pod
Sep 17 16:27:59.841: INFO: Waiting for pod downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0 to disappear
Sep 17 16:27:59.886: INFO: Pod downwardapi-volume-060f9041-6a93-40a0-97bd-08f4168028c0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:27:59.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2621" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":15,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
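The downward API volume spec above exposes the container's own CPU limit as a file inside the pod. A minimal sketch of that wiring; a plain downwardAPI volume is shown for brevity (the projected variant nests the same items under a ProjectedVolumeSource), and the names, image, and 500m limit are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// limits.cpu divided by the 1m divisor, so the file holds "500".
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}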
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":15,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:27:59.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:28:06.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-709" for this suite. • [SLOW TEST:6.205 seconds] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":32,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:28:06.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:28:06.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 17 16:28:10.773: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:28:10.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-340" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
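The termination-message behavior this spec checks hangs off two container fields. A minimal sketch, with the path, the UID 1000, image, and command as illustrative assumptions; after the container exits, the kubelet copies the file's contents into Status.ContainerStatuses[].State.Terminated.Message, which is what the "Expected: &{DONE}" line above compares.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "term",
				Image: "busybox:1.29",
				// Write the message to a non-default path, then exit.
				Command:                  []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
				SecurityContext:          &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	fmt.Println(pod.Name)
}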
[sig-cli] Kubectl client Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:28:10.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 17 16:28:10.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-857'
Sep 17 16:28:16.025: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 17 16:28:16.025: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Sep 17 16:28:16.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-857'
Sep 17 16:28:17.218: INFO: stderr: ""
Sep 17 16:28:17.218: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:28:17.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-857" for this suite.
• [SLOW TEST:6.386 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1484
    should create an rc or deployment from an image [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":5,"skipped":69,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
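The deprecation warning captured in the stderr above points away from `kubectl run --generator=deployment/apps.v1`. A hedged client-go sketch of creating the equivalent Deployment object directly; the deployment name, image, and namespace mirror the log, while the label key, replica count, and the modern context-taking Create signature are assumptions.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("kubectl-857").Create(context.Background(), dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}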
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:28:17.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Sep 17 16:28:25.326: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Sep 17 16:28:27.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 17 16:28:29.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735956905, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 16:28:32.444: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 16:28:32.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:28:33.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6224" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:16.156 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":6,"skipped":91,"failed":0}
SS
------------------------------
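The conversion webhook this spec deploys is wired into the CRD through its spec.conversion stanza. A sketch of that stanza as apiextensions/v1 Go structs; the namespace and service name come from the log above, while the path, port, and CA bundle placeholder are assumptions (the test generates its own serving certificate).

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert" // assumed webhook path
	port := int32(9443)   // assumed service port
	conv := &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				// The apiserver calls this in-cluster service to convert
				// stored objects between served CR versions.
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-6224",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: []byte("<PEM bundle for the webhook serving cert>"), // placeholder
			},
			// Payload versions the webhook understands.
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	fmt.Println(conv.Strategy)
}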
[sig-network] DNS
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:28:33.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3190.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 17 16:28:41.590: INFO: DNS probes using dns-3190/dns-test-91101bf1-8a64-41c5-be9b-67fd354283d8 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:28:41.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3190" for this suite.
• [SLOW TEST:8.630 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":7,"skipped":93,"failed":0}
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:28:42.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 16:28:42.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb" in namespace "downward-api-7666" to be "success or failure"
Sep 17 16:28:42.494: INFO: Pod "downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb": Phase="Pending", Reason="", readiness=false. Elapsed: 54.501138ms
Sep 17 16:28:44.514: INFO: Pod "downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074898209s
Sep 17 16:28:46.522: INFO: Pod "downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb": Phase="Running", Reason="", readiness=true. Elapsed: 4.082612949s
Sep 17 16:28:48.530: INFO: Pod "downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090958633s
STEP: Saw pod success
Sep 17 16:28:48.530: INFO: Pod "downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb" satisfied condition "success or failure"
Sep 17 16:28:48.536: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb container client-container:
STEP: delete the pod
Sep 17 16:28:48.559: INFO: Waiting for pod downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb to disappear
Sep 17 16:28:48.580: INFO: Pod downwardapi-volume-597c053e-e44c-428b-826b-da6000707edb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:28:48.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7666" for this suite.
• [SLOW TEST:6.573 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":93,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:28:48.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-5984e1b6-38b7-47ad-a3ce-a8232b920f46
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:28:56.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4138" for this suite.
• [SLOW TEST:8.357 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":105,"failed":0}
SSSSSSS
------------------------------
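The ConfigMap in the spec above carries both text Data and BinaryData keys, and the kubelet writes the binary value into the volume verbatim, without any decoding. A minimal sketch of such an object; the name, keys, and sample bytes are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"text-data": "hello"},
		// BinaryData may hold arbitrary bytes (here a PNG magic header);
		// a mounting pod sees them exactly as stored.
		BinaryData: map[string][]byte{"binary-data": {0x89, 0x50, 0x4E, 0x47}},
	}
	fmt.Println(cm.Name)
}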
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:28:56.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 16:28:57.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201" in namespace "projected-4514" to be "success or failure"
Sep 17 16:28:57.261: INFO: Pod "downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201": Phase="Pending", Reason="", readiness=false. Elapsed: 25.886151ms
Sep 17 16:28:59.441: INFO: Pod "downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205265099s
Sep 17 16:29:01.449: INFO: Pod "downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.213195663s
STEP: Saw pod success
Sep 17 16:29:01.449: INFO: Pod "downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201" satisfied condition "success or failure"
Sep 17 16:29:01.454: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201 container client-container:
STEP: delete the pod
Sep 17 16:29:01.516: INFO: Waiting for pod downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201 to disappear
Sep 17 16:29:01.529: INFO: Pod downwardapi-volume-298e9489-8e15-4452-bce6-80214bd07201 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:29:01.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4514" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:29:01.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:29:01.606: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 17 16:29:19.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8778 create -f -' Sep 17 16:29:25.793: INFO: stderr: "" Sep 17 16:29:25.793: INFO: stdout: "e2e-test-crd-publish-openapi-1209-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 17 16:29:25.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8778 delete e2e-test-crd-publish-openapi-1209-crds test-cr' Sep 17 16:29:26.924: INFO: stderr: "" Sep 17 16:29:26.924: INFO: stdout: "e2e-test-crd-publish-openapi-1209-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 17 16:29:26.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8778 apply -f -' Sep 17 16:29:28.406: INFO: stderr: "" Sep 17 16:29:28.406: INFO: stdout: "e2e-test-crd-publish-openapi-1209-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 17 16:29:28.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8778 delete e2e-test-crd-publish-openapi-1209-crds test-cr' Sep 17 16:29:29.497: INFO: stderr: "" Sep 17 16:29:29.497: INFO: stdout: "e2e-test-crd-publish-openapi-1209-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 17 16:29:29.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1209-crds' Sep 17 16:29:30.916: INFO: stderr: "" Sep 17 16:29:30.916: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1209-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:29:49.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8778" for 
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:29:49.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-5464
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5464 to expose endpoints map[]
Sep 17 16:29:49.913: INFO: successfully validated that service endpoint-test2 in namespace services-5464 exposes endpoints map[] (12.962504ms elapsed)
STEP: Creating pod pod1 in namespace services-5464
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5464 to expose endpoints map[pod1:[80]]
Sep 17 16:29:54.121: INFO: successfully validated that service endpoint-test2 in namespace services-5464 exposes endpoints map[pod1:[80]] (4.183597181s elapsed)
STEP: Creating pod pod2 in namespace services-5464
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5464 to expose endpoints map[pod1:[80] pod2:[80]]
Sep 17 16:29:57.282: INFO: successfully validated that service endpoint-test2 in namespace services-5464 exposes endpoints map[pod1:[80] pod2:[80]] (3.153753109s elapsed)
STEP: Deleting pod pod1 in namespace services-5464
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5464 to expose endpoints map[pod2:[80]]
Sep 17 16:29:57.309: INFO: successfully validated that service endpoint-test2 in namespace services-5464 exposes endpoints map[pod2:[80]] (19.457029ms elapsed)
STEP: Deleting pod pod2 in namespace services-5464
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5464 to expose endpoints map[]
Sep 17 16:29:57.332: INFO: successfully validated that service endpoint-test2 in namespace services-5464 exposes endpoints map[] (18.144472ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:29:57.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5464" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:7.732 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":12,"skipped":143,"failed":0}
SSSS
------------------------------
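Each "successfully validated" line above comes from polling the service's Endpoints object until the expected pod:port map appears. A rough client-go sketch of that polling loop, assuming current signatures (context-taking Get; wait.PollImmediate from k8s.io/apimachinery); the service name and namespace come from the log, the expected address count is illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until endpoint-test2 exposes exactly `want` ready addresses
	// (2 while pod1 and pod2 both back the service).
	want := 2
	err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints("services-5464").Get(context.Background(), "endpoint-test2", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		got := 0
		for _, subset := range ep.Subsets {
			got += len(subset.Addresses)
		}
		return got == want, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("service endpoints validated")
}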
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:29:57.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Sep 17 16:29:57.816: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 17 16:29:57.834: INFO: Waiting for terminating namespaces to be deleted...
Sep 17 16:29:57.839: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Sep 17 16:29:57.869: INFO: kube-proxy-4jmbs from kube-system started at 2020-09-13 16:54:28 +0000 UTC (1 container statuses recorded)
Sep 17 16:29:57.870: INFO: Container kube-proxy ready: true, restart count 0
Sep 17 16:29:57.870: INFO: kindnet-m6c7w from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded)
Sep 17 16:29:57.870: INFO: Container kindnet-cni ready: true, restart count 0
Sep 17 16:29:57.870: INFO: pod2 from services-5464 started at 2020-09-17 16:29:54 +0000 UTC (1 container statuses recorded)
Sep 17 16:29:57.870: INFO: Container pause ready: true, restart count 0
Sep 17 16:29:57.870: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Sep 17 16:29:57.885: INFO: pod1 from services-5464 started at 2020-09-17 16:29:49 +0000 UTC (1 container statuses recorded)
Sep 17 16:29:57.886: INFO: Container pause ready: true, restart count 0
Sep 17 16:29:57.886: INFO: kube-proxy-2w9xp from kube-system started at 2020-09-13 16:54:31 +0000 UTC (1 container statuses recorded)
Sep 17 16:29:57.886: INFO: Container kube-proxy ready: true, restart count 0
Sep 17 16:29:57.886: INFO: kindnet-4ckzg from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded)
Sep 17 16:29:57.886: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16359f1ef61a3e38], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:29:59.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5194" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":13,"skipped":147,"failed":0}
SSSSSSSSSSSSS
------------------------------
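The unschedulable pod this spec submits only needs a NodeSelector that no node satisfies. A minimal sketch; "restricted-pod" is the name the FailedScheduling event above reports, while the label pair and image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler records the
			// "0/3 nodes are available" FailedScheduling event seen above.
			NodeSelector: map[string]string{"label": "nonexistent-value"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	fmt.Println(pod.Name)
}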
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:30:00.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Sep 17 16:30:00.666: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061426 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 17 16:30:00.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061426 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Sep 17 16:30:10.683: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061510 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep 17 16:30:10.684: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061510 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Sep 17 16:30:20.713: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061561 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 17 16:30:20.714: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061561 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Sep 17 16:30:30.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061606 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 17 16:30:30.725: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-a a66b6eec-3ec8-4517-9a8e-d35f97fd41c5 1061606 0 2020-09-17 16:30:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Sep 17 16:30:40.736: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-b 60e226ee-23a4-4802-9328-e93d2540209c 1061637 0 2020-09-17 16:30:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 17 16:30:40.737: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-b 60e226ee-23a4-4802-9328-e93d2540209c 1061637 0 2020-09-17 16:30:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Sep 17 16:30:50.746: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-b 60e226ee-23a4-4802-9328-e93d2540209c 1061667 0 2020-09-17 16:30:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 17 16:30:50.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-512 /api/v1/namespaces/watch-512/configmaps/e2e-watch-test-configmap-b 60e226ee-23a4-4802-9328-e93d2540209c 1061667 0 2020-09-17 16:30:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:31:00.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-512" for this suite.
• [SLOW TEST:60.643 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":14,"skipped":160,"failed":0}
SSSSSSSSSSSSS
------------------------------
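Each "watch on configmaps with label A/B" above is a label-selected watch, and every create/update/delete on a matching ConfigMap arrives as exactly one event, which the test compares pairwise across watchers. A rough client-go sketch of opening one such watch and draining its events, assuming current signatures (context-taking Watch); the namespace and selector come from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Open a watch restricted to "label A" ConfigMaps, as the spec does.
	w, err := cs.CoreV1().ConfigMaps("watch-512").Watch(context.Background(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// ADDED / MODIFIED / DELETED notifications stream in on this channel.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}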
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:31:00.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-fe8bbd02-c8ae-416e-9dcc-22d948a62495
STEP: Creating a pod to test consume configMaps
Sep 17 16:31:00.874: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb" in namespace "projected-1121" to be "success or failure"
Sep 17 16:31:00.879: INFO: Pod "pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479912ms
Sep 17 16:31:02.886: INFO: Pod "pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011365161s
Sep 17 16:31:04.893: INFO: Pod "pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018210059s
STEP: Saw pod success
Sep 17 16:31:04.893: INFO: Pod "pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb" satisfied condition "success or failure"
Sep 17 16:31:04.897: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb container projected-configmap-volume-test:
STEP: delete the pod
Sep 17 16:31:04.930: INFO: Waiting for pod pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb to disappear
Sep 17 16:31:04.950: INFO: Pod pod-projected-configmaps-077d36f2-3a90-4d20-8b91-b5f2eeec50fb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:31:04.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1121" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":173,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:31:04.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 17 16:31:05.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2032' Sep 17 16:31:06.318: INFO: stderr: "" Sep 17 16:31:06.318: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765 Sep 17 16:31:06.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2032' Sep 17 16:31:08.359: INFO: stderr: "" Sep 17 16:31:08.359: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:31:08.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2032" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":16,"skipped":184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:31:08.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 17 16:31:08.479: INFO: Waiting up to 5m0s for pod "pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94" in namespace "emptydir-6369" to be "success or failure" Sep 17 16:31:08.483: INFO: Pod "pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407024ms Sep 17 16:31:10.491: INFO: Pod "pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012568795s Sep 17 16:31:13.291: INFO: Pod "pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.811883534s STEP: Saw pod success Sep 17 16:31:13.291: INFO: Pod "pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94" satisfied condition "success or failure" Sep 17 16:31:13.478: INFO: Trying to get logs from node jerma-worker pod pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94 container test-container: STEP: delete the pod Sep 17 16:31:13.654: INFO: Waiting for pod pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94 to disappear Sep 17 16:31:13.707: INFO: Pod pod-4d0ff5f3-4a13-44fb-b0d3-ad5f1da90c94 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:31:13.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6369" for this suite. 
• [SLOW TEST:5.351 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":238,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:31:13.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Sep 17 16:31:14.379: INFO: created pod pod-service-account-defaultsa Sep 17 16:31:14.379: INFO: pod pod-service-account-defaultsa service account token volume mount: true Sep 17 16:31:14.388: INFO: created pod pod-service-account-mountsa Sep 17 16:31:14.388: INFO: pod pod-service-account-mountsa service account token volume mount: true Sep 17 16:31:14.394: INFO: created pod pod-service-account-nomountsa Sep 17 16:31:14.395: INFO: pod pod-service-account-nomountsa service account token volume mount: false Sep 17 16:31:14.459: INFO: created pod pod-service-account-defaultsa-mountspec Sep 17 16:31:14.460: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Sep 17 16:31:14.517: INFO: created pod pod-service-account-mountsa-mountspec Sep 17 16:31:14.517: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Sep 17 16:31:14.552: INFO: created pod pod-service-account-nomountsa-mountspec Sep 17 16:31:14.552: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Sep 17 16:31:14.594: INFO: created pod pod-service-account-defaultsa-nomountspec Sep 17 16:31:14.594: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Sep 17 16:31:14.619: INFO: created pod pod-service-account-mountsa-nomountspec Sep 17 16:31:14.619: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Sep 17 16:31:14.661: INFO: created pod pod-service-account-nomountsa-nomountspec Sep 17 16:31:14.661: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:31:14.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"svcaccounts-9374" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":18,"skipped":240,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:31:14.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8887 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 17 16:31:14.929: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 17 16:31:53.126: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.168:8080/dial?request=hostname&protocol=http&host=10.244.1.217&port=8080&tries=1'] Namespace:pod-network-test-8887 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 17 16:31:53.127: INFO: >>> kubeConfig: /root/.kube/config I0917 16:31:53.236210 7 log.go:172] (0x780c380) (0x780c3f0) Create stream I0917 16:31:53.236581 7 log.go:172] (0x780c380) (0x780c3f0) Stream added, broadcasting: 1 I0917 16:31:53.250545 7 log.go:172] (0x780c380) Reply frame received for 1 I0917 16:31:53.250934 7 log.go:172] (0x780c380) (0x7dac0e0) Create stream I0917 16:31:53.251001 7 log.go:172] (0x780c380) (0x7dac0e0) Stream added, broadcasting: 3 I0917 16:31:53.252845 7 log.go:172] (0x780c380) Reply frame received for 3 I0917 16:31:53.253361 7 log.go:172] (0x780c380) (0x780c5b0) Create stream I0917 16:31:53.253483 7 log.go:172] (0x780c380) (0x780c5b0) Stream added, broadcasting: 5 I0917 16:31:53.255020 7 log.go:172] (0x780c380) Reply frame received for 5 I0917 16:31:53.351005 7 log.go:172] (0x780c380) Data frame received for 5 I0917 16:31:53.351469 7 log.go:172] (0x780c380) Data frame received for 3 I0917 16:31:53.351733 7 log.go:172] (0x7dac0e0) (3) Data frame handling I0917 16:31:53.352256 7 log.go:172] (0x780c5b0) (5) Data frame handling I0917 16:31:53.352908 7 log.go:172] (0x7dac0e0) (3) Data frame sent I0917 16:31:53.353370 7 log.go:172] (0x780c380) Data frame received for 1 I0917 16:31:53.353549 7 log.go:172] (0x780c3f0) (1) Data frame handling I0917 16:31:53.353733 7 log.go:172] (0x780c3f0) (1) Data frame sent I0917 16:31:53.353911 7 log.go:172] (0x780c380) Data frame received for 3 I0917 16:31:53.354039 7 log.go:172] (0x7dac0e0) (3) Data frame handling I0917 16:31:53.357497 7 log.go:172] (0x780c380) (0x780c3f0) Stream removed, broadcasting: 1 I0917 16:31:53.358720 7 log.go:172] (0x780c380) Go away received I0917 16:31:53.361559 7 log.go:172] (0x780c380) (0x780c3f0) Stream removed, broadcasting: 1 I0917 
16:31:53.362105 7 log.go:172] (0x780c380) (0x7dac0e0) Stream removed, broadcasting: 3 I0917 16:31:53.362511 7 log.go:172] (0x780c380) (0x780c5b0) Stream removed, broadcasting: 5 Sep 17 16:31:53.364: INFO: Waiting for responses: map[] Sep 17 16:31:53.370: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.168:8080/dial?request=hostname&protocol=http&host=10.244.2.167&port=8080&tries=1'] Namespace:pod-network-test-8887 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 17 16:31:53.370: INFO: >>> kubeConfig: /root/.kube/config I0917 16:31:53.473666 7 log.go:172] (0x780ca10) (0x780ca80) Create stream I0917 16:31:53.473815 7 log.go:172] (0x780ca10) (0x780ca80) Stream added, broadcasting: 1 I0917 16:31:53.479719 7 log.go:172] (0x780ca10) Reply frame received for 1 I0917 16:31:53.479901 7 log.go:172] (0x780ca10) (0x7dac700) Create stream I0917 16:31:53.479984 7 log.go:172] (0x780ca10) (0x7dac700) Stream added, broadcasting: 3 I0917 16:31:53.481476 7 log.go:172] (0x780ca10) Reply frame received for 3 I0917 16:31:53.481634 7 log.go:172] (0x780ca10) (0x780cc40) Create stream I0917 16:31:53.481705 7 log.go:172] (0x780ca10) (0x780cc40) Stream added, broadcasting: 5 I0917 16:31:53.482855 7 log.go:172] (0x780ca10) Reply frame received for 5 I0917 16:31:53.545552 7 log.go:172] (0x780ca10) Data frame received for 3 I0917 16:31:53.545702 7 log.go:172] (0x780ca10) Data frame received for 5 I0917 16:31:53.545868 7 log.go:172] (0x780cc40) (5) Data frame handling I0917 16:31:53.546043 7 log.go:172] (0x7dac700) (3) Data frame handling I0917 16:31:53.546181 7 log.go:172] (0x7dac700) (3) Data frame sent I0917 16:31:53.546286 7 log.go:172] (0x780ca10) Data frame received for 3 I0917 16:31:53.546390 7 log.go:172] (0x7dac700) (3) Data frame handling I0917 16:31:53.547835 7 log.go:172] (0x780ca10) Data frame received for 1 I0917 16:31:53.547984 7 log.go:172] (0x780ca80) (1) Data frame handling I0917 16:31:53.548119 7 log.go:172] (0x780ca80) (1) Data frame sent I0917 16:31:53.548416 7 log.go:172] (0x780ca10) (0x780ca80) Stream removed, broadcasting: 1 I0917 16:31:53.548617 7 log.go:172] (0x780ca10) Go away received I0917 16:31:53.549130 7 log.go:172] (0x780ca10) (0x780ca80) Stream removed, broadcasting: 1 I0917 16:31:53.549318 7 log.go:172] (0x780ca10) (0x7dac700) Stream removed, broadcasting: 3 I0917 16:31:53.549431 7 log.go:172] (0x780ca10) (0x780cc40) Stream removed, broadcasting: 5 Sep 17 16:31:53.549: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:31:53.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8887" for this suite. 
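The two ExecWithOptions blocks above run curl from the host test pod against agnhost's /dial endpoint on 10.244.2.168:8080, which fans the request out to each target pod IP (10.244.1.217, then 10.244.2.167) and reports which hostnames answered; "Waiting for responses: map[]" means no expected peer is still outstanding. For orientation, a sketch of the kind of netexec server pod the test schedules on each node (the pod name is illustrative; the subcommand and ports match the URLs in the log):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                  # illustrative; one such pod per schedulable node
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080            # serves the /hostname and /dial handlers used above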
• [SLOW TEST:38.763 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:31:53.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2ea70fcc-932c-45a4-b7bc-8cb15d83114e STEP: Creating a pod to test consume configMaps Sep 17 16:31:53.646: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7" in namespace "projected-5973" to be "success or failure" Sep 17 16:31:53.660: INFO: Pod "pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.184785ms Sep 17 16:31:55.667: INFO: Pod "pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020904798s Sep 17 16:31:57.674: INFO: Pod "pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027762305s STEP: Saw pod success Sep 17 16:31:57.674: INFO: Pod "pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7" satisfied condition "success or failure" Sep 17 16:31:57.678: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7 container projected-configmap-volume-test: STEP: delete the pod Sep 17 16:31:57.708: INFO: Waiting for pod pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7 to disappear Sep 17 16:31:57.712: INFO: Pod pod-projected-configmaps-c4774e91-5b96-4125-84f5-0ef7866c6cf7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:31:57.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5973" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:31:57.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 17 16:31:57.836: INFO: Pod name pod-release: Found 0 pods out of 1 Sep 17 16:32:02.846: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:32:02.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2415" for this suite. 
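The release mechanics above rely on a ReplicationController owning only pods whose labels match its selector: when the test relabels the running pod (the "When the matched label of one of its pods change" step), the controller drops its ownerReference, the pod is released, and a replacement is created to restore the replica count. A minimal sketch of such a controller, consistent with the pod-release name in the log (the container spec is illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release              # pods that stop matching this are orphaned
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release          # illustrative container spec
        image: docker.io/library/httpd:2.4.38-alpine

Relabeling the live pod, e.g. kubectl label pod <pod> name=not-matching --overwrite, reproduces the released state outside the suite.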
• [SLOW TEST:5.201 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":21,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:32:02.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:32:03.126: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 17 16:32:07.244: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 17 16:32:09.252: INFO: Creating deployment "test-rollover-deployment" Sep 17 16:32:09.289: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 17 16:32:11.307: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 17 16:32:11.318: INFO: Ensure that both replica sets have 1 created replica Sep 17 16:32:11.327: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 17 16:32:11.339: INFO: Updating deployment test-rollover-deployment Sep 17 16:32:11.340: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 17 16:32:13.423: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 17 16:32:13.441: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 17 16:32:13.452: INFO: all replica sets need to contain the pod-template-hash label Sep 17 16:32:13.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957131, loc:(*time.Location)(0x610c660)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:32:15.468: INFO: all replica sets need to contain the pod-template-hash label Sep 17 16:32:15.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957134, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:32:17.465: INFO: all replica sets need to contain the pod-template-hash label Sep 17 16:32:17.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957134, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:32:19.819: INFO: all replica sets need to contain the pod-template-hash label Sep 17 16:32:19.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957134, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:32:21.468: INFO: all replica sets need to contain the pod-template-hash label Sep 17 16:32:21.469: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957134, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:32:23.468: INFO: all replica sets need to contain the pod-template-hash label Sep 17 16:32:23.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957134, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957129, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:32:25.468: INFO: Sep 17 16:32:25.468: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Sep 17 16:32:25.486: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7185 /apis/apps/v1/namespaces/deployment-7185/deployments/test-rollover-deployment 29356f5c-9d01-4036-a3eb-3562abfc5cd5 1062390 2 2020-09-17 16:32:09 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xa443a98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-17 16:32:09 +0000 UTC,LastTransitionTime:2020-09-17 16:32:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-09-17 16:32:25 +0000 UTC,LastTransitionTime:2020-09-17 16:32:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 17 16:32:25.494: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-7185 /apis/apps/v1/namespaces/deployment-7185/replicasets/test-rollover-deployment-574d6dfbff d4a302d1-ccc5-49b6-9f2b-e698b457d0f2 1062379 2 2020-09-17 16:32:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 29356f5c-9d01-4036-a3eb-3562abfc5cd5 0xa443f07 0xa443f08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xa443f78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 17 16:32:25.494: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 17 16:32:25.495: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7185 /apis/apps/v1/namespaces/deployment-7185/replicasets/test-rollover-controller a7236472-eee3-4d1e-832f-255d32324250 1062389 2 2020-09-17 16:32:03 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 29356f5c-9d01-4036-a3eb-3562abfc5cd5 0xa443e27 0xa443e28}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xa443e98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 17 16:32:25.496: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-7185 /apis/apps/v1/namespaces/deployment-7185/replicasets/test-rollover-deployment-f6c94f66c 4d525e63-113c-42bc-baad-ed9c0855a03c 1062298 2 2020-09-17 16:32:09 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 29356f5c-9d01-4036-a3eb-3562abfc5cd5 0xa443fe0 0xa443fe1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8fec058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 17 16:32:25.519: INFO: Pod "test-rollover-deployment-574d6dfbff-xkpjz" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-xkpjz test-rollover-deployment-574d6dfbff- deployment-7185 /api/v1/namespaces/deployment-7185/pods/test-rollover-deployment-574d6dfbff-xkpjz 2698746a-58c5-4dde-8432-6deaefaf621e 1062328 0 2020-09-17 16:32:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff d4a302d1-ccc5-49b6-9f2b-e698b457d0f2 0x8fec547 0x8fec548}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cjcqr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cjcqr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cjcqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:32:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:32:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:32:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.2.173,StartTime:2020-09-17 16:32:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 16:32:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://769a0039fb3374c2daf0f66d4e9491f489119b7f468e95124ca5f7060ab1d141,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:32:25.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7185" for this suite. • [SLOW TEST:22.601 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":22,"skipped":317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:32:25.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-4831b43a-a59b-4afc-8e39-4fa0b30d57e4 STEP: Creating a pod to test consume secrets Sep 17 16:32:25.639: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea" in namespace "projected-9172" to be "success or failure" Sep 17 16:32:25.675: INFO: Pod "pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea": Phase="Pending", Reason="", readiness=false. Elapsed: 36.445315ms Sep 17 16:32:27.681: INFO: Pod "pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042379666s Sep 17 16:32:29.688: INFO: Pod "pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04944269s STEP: Saw pod success Sep 17 16:32:29.689: INFO: Pod "pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea" satisfied condition "success or failure" Sep 17 16:32:29.693: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea container projected-secret-volume-test: STEP: delete the pod Sep 17 16:32:29.731: INFO: Waiting for pod pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea to disappear Sep 17 16:32:29.743: INFO: Pod pod-projected-secrets-2b645aae-e08c-4902-a485-86cb32f389ea no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:32:29.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9172" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:32:29.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Sep 17 16:32:29.878: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Sep 17 16:32:29.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Sep 17 16:32:31.681: INFO: stderr: "" Sep 17 16:32:31.682: INFO: stdout: "service/agnhost-slave created\n" Sep 17 16:32:31.683: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Sep 17 16:32:31.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Sep 17 16:32:33.177: INFO: stderr: "" Sep 17 16:32:33.178: INFO: stdout: "service/agnhost-master created\n" Sep 17 16:32:33.179: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 17 16:32:33.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Sep 17 16:32:34.674: INFO: stderr: "" Sep 17 16:32:34.675: INFO: stdout: "service/frontend created\n" Sep 17 16:32:34.680: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Sep 17 16:32:34.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Sep 17 16:32:36.137: INFO: stderr: "" Sep 17 16:32:36.137: INFO: stdout: "deployment.apps/frontend created\n" Sep 17 16:32:36.139: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 17 16:32:36.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Sep 17 16:32:37.649: INFO: stderr: "" Sep 17 16:32:37.649: INFO: stdout: "deployment.apps/agnhost-master created\n" Sep 17 16:32:37.651: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 17 16:32:37.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Sep 17 16:32:40.239: INFO: stderr: "" Sep 17 16:32:40.239: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Sep 17 16:32:40.239: INFO: Waiting for all frontend pods to be Running. Sep 17 16:32:45.293: INFO: Waiting for frontend to serve content. Sep 17 16:32:46.344: INFO: Trying to add a new entry to the guestbook. Sep 17 16:32:46.385: INFO: Verifying that added entry can be retrieved. Sep 17 16:32:46.393: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Sep 17 16:32:51.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9693' Sep 17 16:32:52.530: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 17 16:32:52.530: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Sep 17 16:32:52.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9693' Sep 17 16:32:53.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 16:32:53.618: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Sep 17 16:32:53.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9693' Sep 17 16:32:54.782: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 16:32:54.782: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 17 16:32:54.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9693' Sep 17 16:32:55.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 16:32:55.892: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 17 16:32:55.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9693' Sep 17 16:32:57.090: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 16:32:57.091: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Sep 17 16:32:57.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9693' Sep 17 16:32:58.317: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 16:32:58.317: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:32:58.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9693" for this suite. 
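Two points worth noting in the run above. The lone "Failed to get response from guestbook" entry at 16:32:46 is a polled validation step, not a verdict; the suite retries until the added entry is retrievable, and the spec passes below. Also, the frontend Service is created with its load-balancer line commented out; on a cluster with an external load-balancer integration, the uncommented variant of that same manifest would read:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # uncommented: requests an external IP where the cluster supports it
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend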
• [SLOW TEST:28.951 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381 should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":24,"skipped":416,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:32:58.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:32:59.141: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 17 16:33:03.161: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Sep 17 16:33:07.208: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4967 /apis/apps/v1/namespaces/deployment-4967/deployments/test-cleanup-deployment 8acce84d-2f85-462f-8f28-8baeb5d3042e 1062796 1 2020-09-17 16:33:03 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8b076c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-17 16:33:03 +0000 UTC,LastTransitionTime:2020-09-17 16:33:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-09-17 16:33:06 +0000 UTC,LastTransitionTime:2020-09-17 16:33:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 17 16:33:07.215: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-4967 /apis/apps/v1/namespaces/deployment-4967/replicasets/test-cleanup-deployment-55ffc6b7b6 9ffe06e6-c180-4b45-9820-9d0c56931f43 1062785 1 2020-09-17 16:33:03 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 8acce84d-2f85-462f-8f28-8baeb5d3042e 0x8b07a57 0x8b07a58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8b07ac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 17 16:33:07.223: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-cjldc" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-cjldc test-cleanup-deployment-55ffc6b7b6- deployment-4967 /api/v1/namespaces/deployment-4967/pods/test-cleanup-deployment-55ffc6b7b6-cjldc fd916fb4-caf2-491f-8cfb-1c841405a446 1062784 0 2020-09-17 16:33:03 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 9ffe06e6-c180-4b45-9820-9d0c56931f43 0x8d7fd57 0x8d7fd58}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5hgpd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5hgpd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5hgpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:33:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:33:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:33:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 16:33:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.2.178,StartTime:2020-09-17 16:33:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 16:33:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://bdf5737c0ecbcf9b2997aac8016f3596597f75d166339118b339175c2b32325f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:33:07.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4967" for this suite. • [SLOW TEST:8.503 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":25,"skipped":432,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:33:07.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8078/configmap-test-00ae83ce-75fb-46ea-b9a8-7d0eafea33cb STEP: Creating a pod to test consume configMaps Sep 17 16:33:07.558: INFO: Waiting up to 5m0s for pod "pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68" in namespace "configmap-8078" to be "success or failure" Sep 17 16:33:07.566: INFO: Pod "pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68": Phase="Pending", Reason="", readiness=false. Elapsed: 7.935263ms Sep 17 16:33:09.580: INFO: Pod "pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021510075s Sep 17 16:33:11.588: INFO: Pod "pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68": Phase="Succeeded", Reason="", readiness=false. 
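------------------------------
A quick way to reproduce the ConfigMap-as-environment-variable check above by hand is sketched below. This is an editorial sketch, not output from this run: the names demo-cm, env-demo, the key data-1 and the busybox image are illustrative assumptions, not taken from the test.

# create a ConfigMap with a single key (illustrative names)
kubectl create configmap demo-cm --from-literal=data-1=value-1
# run a pod that imports the key as an environment variable and prints it
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-cm
          key: data-1
EOF
# once the pod has completed:
kubectl logs env-demo    # expect: DATA_1=value-1
------------------------------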
Elapsed: 4.029508749s STEP: Saw pod success Sep 17 16:33:11.588: INFO: Pod "pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68" satisfied condition "success or failure" Sep 17 16:33:11.593: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68 container env-test: STEP: delete the pod Sep 17 16:33:11.627: INFO: Waiting for pod pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68 to disappear Sep 17 16:33:11.632: INFO: Pod pod-configmaps-41d75512-9ed5-4b05-ad60-5dbea509ca68 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:33:11.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8078" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":435,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:33:11.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:33:21.954: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:33:23.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957201, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957201, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957202, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957201, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:33:27.047: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Sep 17 16:33:27.102: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:33:27.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3986" for this suite. STEP: Destroying namespace "webhook-3986-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.553 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":27,"skipped":450,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:33:27.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 16:33:27.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461" in namespace "projected-9861" to be "success or failure" Sep 17 16:33:27.419: INFO: Pod "downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461": Phase="Pending", Reason="", readiness=false. Elapsed: 11.368938ms Sep 17 16:33:29.425: INFO: Pod "downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017906086s Sep 17 16:33:31.432: INFO: Pod "downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461": Phase="Succeeded", Reason="", readiness=false. 
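------------------------------
The check above relies on a documented downward-API fallback: when a container sets no CPU limit, limits.cpu exposed through a downward-API volume resolves to the node's allocatable CPU. A minimal sketch, not from this run (pod name, image and mount path are illustrative assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits set, so cpu_limit falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
kubectl logs downward-demo    # prints the node-allocatable CPU in millicores
------------------------------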
Elapsed: 4.024667302s STEP: Saw pod success Sep 17 16:33:31.432: INFO: Pod "downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461" satisfied condition "success or failure" Sep 17 16:33:31.438: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461 container client-container: STEP: delete the pod Sep 17 16:33:31.460: INFO: Waiting for pod downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461 to disappear Sep 17 16:33:31.465: INFO: Pod downwardapi-volume-027bb83c-ab3a-4d3b-a612-7833d3933461 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:33:31.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9861" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":450,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:33:31.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2b90fe69-c715-47aa-ae62-64b075ef486e STEP: Creating a pod to test consume configMaps Sep 17 16:33:31.572: INFO: Waiting up to 5m0s for pod "pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758" in namespace "configmap-5060" to be "success or failure" Sep 17 16:33:31.588: INFO: Pod "pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758": Phase="Pending", Reason="", readiness=false. Elapsed: 15.332314ms Sep 17 16:33:33.595: INFO: Pod "pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02233507s Sep 17 16:33:35.602: INFO: Pod "pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758": Phase="Succeeded", Reason="", readiness=false. 
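------------------------------
The non-root variant above differs from the plain ConfigMap volume test mainly in the pod-level securityContext. A hand-rolled equivalent might look like this (all object names and the uid 1000 are illustrative assumptions):

kubectl create configmap vol-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # run the container as a non-root uid
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: vol-demo         # projected files default to mode 0644, readable by uid 1000
EOF
------------------------------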
Elapsed: 4.030129521s STEP: Saw pod success Sep 17 16:33:35.603: INFO: Pod "pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758" satisfied condition "success or failure" Sep 17 16:33:35.607: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758 container configmap-volume-test: STEP: delete the pod Sep 17 16:33:35.633: INFO: Waiting for pod pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758 to disappear Sep 17 16:33:36.495: INFO: Pod pod-configmaps-662f8c0e-8c39-4d63-ba1d-b61bd2368758 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:33:36.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5060" for this suite. • [SLOW TEST:5.052 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:33:36.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 17 16:33:36.688: INFO: Waiting up to 5m0s for pod "pod-f6280dcf-23c3-40ac-8490-958c5796f2c1" in namespace "emptydir-9339" to be "success or failure" Sep 17 16:33:36.724: INFO: Pod "pod-f6280dcf-23c3-40ac-8490-958c5796f2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 35.199751ms Sep 17 16:33:38.796: INFO: Pod "pod-f6280dcf-23c3-40ac-8490-958c5796f2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107164427s Sep 17 16:33:40.837: INFO: Pod "pod-f6280dcf-23c3-40ac-8490-958c5796f2c1": Phase="Succeeded", Reason="", readiness=false. 
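------------------------------
"(non-root,0644,tmpfs)" in the test name encodes three knobs: a non-root uid, a 0644 file mode, and an emptyDir backed by memory rather than node disk. A rough stand-alone reproduction, under the assumption that busybox and the names below are acceptable substitutes for the suite's test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed; omit medium for node-disk backing
EOF
------------------------------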
Elapsed: 4.148623008s STEP: Saw pod success Sep 17 16:33:40.837: INFO: Pod "pod-f6280dcf-23c3-40ac-8490-958c5796f2c1" satisfied condition "success or failure" Sep 17 16:33:40.865: INFO: Trying to get logs from node jerma-worker2 pod pod-f6280dcf-23c3-40ac-8490-958c5796f2c1 container test-container: STEP: delete the pod Sep 17 16:33:40.922: INFO: Waiting for pod pod-f6280dcf-23c3-40ac-8490-958c5796f2c1 to disappear Sep 17 16:33:40.926: INFO: Pod pod-f6280dcf-23c3-40ac-8490-958c5796f2c1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:33:40.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9339" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":490,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:33:40.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:33:51.761: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:33:53.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957231, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957231, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957231, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957231, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:33:56.816: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:33:57.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2815" for this suite. STEP: Destroying namespace "webhook-2815-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.267 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":31,"skipped":500,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:33:57.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Sep 17 16:33:57.282: INFO: Waiting up to 5m0s for pod "client-containers-bd106025-7b04-48be-8875-0e51cc8b1508" in namespace "containers-8378" to be "success or failure" Sep 17 16:33:57.344: INFO: Pod "client-containers-bd106025-7b04-48be-8875-0e51cc8b1508": Phase="Pending", Reason="", readiness=false. Elapsed: 61.860462ms Sep 17 16:33:59.447: INFO: Pod "client-containers-bd106025-7b04-48be-8875-0e51cc8b1508": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164741036s Sep 17 16:34:01.458: INFO: Pod "client-containers-bd106025-7b04-48be-8875-0e51cc8b1508": Phase="Pending", Reason="", readiness=false. 
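------------------------------
The entrypoint-override test above exercises the pod-spec rule that a container's command field replaces the image's ENTRYPOINT (and args would replace its CMD). A minimal sketch with illustrative names, not taken from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # command overrides the image ENTRYPOINT entirely
    command: ["echo", "entrypoint overridden"]
EOF
kubectl logs entrypoint-override    # expect: entrypoint overridden
------------------------------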
Elapsed: 4.175435992s Sep 17 16:34:03.466: INFO: Pod "client-containers-bd106025-7b04-48be-8875-0e51cc8b1508": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183633449s STEP: Saw pod success Sep 17 16:34:03.466: INFO: Pod "client-containers-bd106025-7b04-48be-8875-0e51cc8b1508" satisfied condition "success or failure" Sep 17 16:34:03.471: INFO: Trying to get logs from node jerma-worker2 pod client-containers-bd106025-7b04-48be-8875-0e51cc8b1508 container test-container: STEP: delete the pod Sep 17 16:34:03.491: INFO: Waiting for pod client-containers-bd106025-7b04-48be-8875-0e51cc8b1508 to disappear Sep 17 16:34:03.502: INFO: Pod client-containers-bd106025-7b04-48be-8875-0e51cc8b1508 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:34:03.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8378" for this suite. • [SLOW TEST:6.341 seconds] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":506,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:34:03.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5325 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-5325 Sep 17 16:34:03.673: INFO: Found 0 stateful pods, waiting for 1 Sep 17 16:34:13.682: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Sep 17 16:34:13.712: INFO: Deleting all statefulset in ns statefulset-5325 Sep 17 16:34:13.755: INFO: Scaling statefulset ss to 0 Sep 17 16:34:33.808: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 16:34:33.813: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:34:33.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5325" for this suite. • [SLOW TEST:30.290 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":33,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:34:33.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6237 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6237 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6237 Sep 17 16:34:34.647: INFO: Found 0 stateful pods, waiting for 1 Sep 17 16:34:44.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: 
Confirming that stateful set scale up will halt with unhealthy stateful pod Sep 17 16:34:44.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 16:34:46.118: INFO: stderr: "I0917 16:34:45.963287 543 log.go:172] (0x287c380) (0x287c3f0) Create stream\nI0917 16:34:45.966412 543 log.go:172] (0x287c380) (0x287c3f0) Stream added, broadcasting: 1\nI0917 16:34:45.985746 543 log.go:172] (0x287c380) Reply frame received for 1\nI0917 16:34:45.986291 543 log.go:172] (0x287c380) (0x2c320e0) Create stream\nI0917 16:34:45.986373 543 log.go:172] (0x287c380) (0x2c320e0) Stream added, broadcasting: 3\nI0917 16:34:45.987884 543 log.go:172] (0x287c380) Reply frame received for 3\nI0917 16:34:45.988279 543 log.go:172] (0x287c380) (0x2c322a0) Create stream\nI0917 16:34:45.988392 543 log.go:172] (0x287c380) (0x2c322a0) Stream added, broadcasting: 5\nI0917 16:34:45.989762 543 log.go:172] (0x287c380) Reply frame received for 5\nI0917 16:34:46.062769 543 log.go:172] (0x287c380) Data frame received for 5\nI0917 16:34:46.063002 543 log.go:172] (0x2c322a0) (5) Data frame handling\nI0917 16:34:46.063441 543 log.go:172] (0x2c322a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 16:34:46.099699 543 log.go:172] (0x287c380) Data frame received for 3\nI0917 16:34:46.099899 543 log.go:172] (0x2c320e0) (3) Data frame handling\nI0917 16:34:46.100066 543 log.go:172] (0x2c320e0) (3) Data frame sent\nI0917 16:34:46.100278 543 log.go:172] (0x287c380) Data frame received for 3\nI0917 16:34:46.100451 543 log.go:172] (0x2c320e0) (3) Data frame handling\nI0917 16:34:46.100770 543 log.go:172] (0x287c380) Data frame received for 5\nI0917 16:34:46.100972 543 log.go:172] (0x2c322a0) (5) Data frame handling\nI0917 16:34:46.102336 543 log.go:172] (0x287c380) Data frame received for 1\nI0917 16:34:46.102442 543 log.go:172] (0x287c3f0) (1) Data frame handling\nI0917 16:34:46.102579 543 log.go:172] (0x287c3f0) (1) Data frame sent\nI0917 16:34:46.103985 543 log.go:172] (0x287c380) (0x287c3f0) Stream removed, broadcasting: 1\nI0917 16:34:46.107248 543 log.go:172] (0x287c380) Go away received\nI0917 16:34:46.110155 543 log.go:172] (0x287c380) (0x287c3f0) Stream removed, broadcasting: 1\nI0917 16:34:46.110476 543 log.go:172] (0x287c380) (0x2c320e0) Stream removed, broadcasting: 3\nI0917 16:34:46.110992 543 log.go:172] (0x287c380) (0x2c322a0) Stream removed, broadcasting: 5\n" Sep 17 16:34:46.120: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 16:34:46.120: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 16:34:46.127: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 17 16:34:56.136: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 17 16:34:56.136: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 16:34:56.169: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99994002s Sep 17 16:34:57.177: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980575442s Sep 17 16:34:58.185: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972491481s Sep 17 16:34:59.191: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964874641s Sep 17 16:35:00.199: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 5.958532751s Sep 17 16:35:01.210: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.951090035s Sep 17 16:35:02.217: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.939796404s Sep 17 16:35:03.225: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.932961806s Sep 17 16:35:04.232: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.925160982s Sep 17 16:35:05.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 917.51926ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6237 Sep 17 16:35:06.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 16:35:07.616: INFO: stderr: "I0917 16:35:07.510842 567 log.go:172] (0x28f9260) (0x28f92d0) Create stream\nI0917 16:35:07.513259 567 log.go:172] (0x28f9260) (0x28f92d0) Stream added, broadcasting: 1\nI0917 16:35:07.527670 567 log.go:172] (0x28f9260) Reply frame received for 1\nI0917 16:35:07.528439 567 log.go:172] (0x28f9260) (0x270bdc0) Create stream\nI0917 16:35:07.528538 567 log.go:172] (0x28f9260) (0x270bdc0) Stream added, broadcasting: 3\nI0917 16:35:07.529947 567 log.go:172] (0x28f9260) Reply frame received for 3\nI0917 16:35:07.530244 567 log.go:172] (0x28f9260) (0x25fe3f0) Create stream\nI0917 16:35:07.530315 567 log.go:172] (0x28f9260) (0x25fe3f0) Stream added, broadcasting: 5\nI0917 16:35:07.531787 567 log.go:172] (0x28f9260) Reply frame received for 5\nI0917 16:35:07.600100 567 log.go:172] (0x28f9260) Data frame received for 3\nI0917 16:35:07.600341 567 log.go:172] (0x28f9260) Data frame received for 1\nI0917 16:35:07.600525 567 log.go:172] (0x28f9260) Data frame received for 5\nI0917 16:35:07.600676 567 log.go:172] (0x28f92d0) (1) Data frame handling\nI0917 16:35:07.600789 567 log.go:172] (0x270bdc0) (3) Data frame handling\nI0917 16:35:07.601050 567 log.go:172] (0x25fe3f0) (5) Data frame handling\nI0917 16:35:07.601916 567 log.go:172] (0x270bdc0) (3) Data frame sent\nI0917 16:35:07.602131 567 log.go:172] (0x25fe3f0) (5) Data frame sent\nI0917 16:35:07.602721 567 log.go:172] (0x28f9260) Data frame received for 3\nI0917 16:35:07.602907 567 log.go:172] (0x270bdc0) (3) Data frame handling\nI0917 16:35:07.603164 567 log.go:172] (0x28f92d0) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0917 16:35:07.603362 567 log.go:172] (0x28f9260) Data frame received for 5\nI0917 16:35:07.603434 567 log.go:172] (0x25fe3f0) (5) Data frame handling\nI0917 16:35:07.604405 567 log.go:172] (0x28f9260) (0x28f92d0) Stream removed, broadcasting: 1\nI0917 16:35:07.606567 567 log.go:172] (0x28f9260) Go away received\nI0917 16:35:07.608000 567 log.go:172] (0x28f9260) (0x28f92d0) Stream removed, broadcasting: 1\nI0917 16:35:07.608430 567 log.go:172] (0x28f9260) (0x270bdc0) Stream removed, broadcasting: 3\nI0917 16:35:07.608615 567 log.go:172] (0x28f9260) (0x25fe3f0) Stream removed, broadcasting: 5\n" Sep 17 16:35:07.617: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 16:35:07.617: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 16:35:07.622: INFO: Found 1 stateful pods, waiting for 3 Sep 17 16:35:17.645: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 17 
16:35:17.645: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 17 16:35:17.645: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Sep 17 16:35:17.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 16:35:19.073: INFO: stderr: "I0917 16:35:18.939677 590 log.go:172] (0x294c000) (0x294c070) Create stream\nI0917 16:35:18.942531 590 log.go:172] (0x294c000) (0x294c070) Stream added, broadcasting: 1\nI0917 16:35:18.961201 590 log.go:172] (0x294c000) Reply frame received for 1\nI0917 16:35:18.961623 590 log.go:172] (0x294c000) (0x28e0070) Create stream\nI0917 16:35:18.961689 590 log.go:172] (0x294c000) (0x28e0070) Stream added, broadcasting: 3\nI0917 16:35:18.963307 590 log.go:172] (0x294c000) Reply frame received for 3\nI0917 16:35:18.963624 590 log.go:172] (0x294c000) (0x24aa8c0) Create stream\nI0917 16:35:18.963731 590 log.go:172] (0x294c000) (0x24aa8c0) Stream added, broadcasting: 5\nI0917 16:35:18.965074 590 log.go:172] (0x294c000) Reply frame received for 5\nI0917 16:35:19.055971 590 log.go:172] (0x294c000) Data frame received for 3\nI0917 16:35:19.056435 590 log.go:172] (0x28e0070) (3) Data frame handling\nI0917 16:35:19.057027 590 log.go:172] (0x28e0070) (3) Data frame sent\nI0917 16:35:19.057976 590 log.go:172] (0x294c000) Data frame received for 5\nI0917 16:35:19.058114 590 log.go:172] (0x24aa8c0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 16:35:19.058233 590 log.go:172] (0x294c000) Data frame received for 3\nI0917 16:35:19.058373 590 log.go:172] (0x294c000) Data frame received for 1\nI0917 16:35:19.058497 590 log.go:172] (0x294c070) (1) Data frame handling\nI0917 16:35:19.058839 590 log.go:172] (0x294c070) (1) Data frame sent\nI0917 16:35:19.058951 590 log.go:172] (0x28e0070) (3) Data frame handling\nI0917 16:35:19.059230 590 log.go:172] (0x24aa8c0) (5) Data frame sent\nI0917 16:35:19.059366 590 log.go:172] (0x294c000) Data frame received for 5\nI0917 16:35:19.059488 590 log.go:172] (0x24aa8c0) (5) Data frame handling\nI0917 16:35:19.061103 590 log.go:172] (0x294c000) (0x294c070) Stream removed, broadcasting: 1\nI0917 16:35:19.062594 590 log.go:172] (0x294c000) Go away received\nI0917 16:35:19.065002 590 log.go:172] (0x294c000) (0x294c070) Stream removed, broadcasting: 1\nI0917 16:35:19.065178 590 log.go:172] (0x294c000) (0x28e0070) Stream removed, broadcasting: 3\nI0917 16:35:19.065335 590 log.go:172] (0x294c000) (0x24aa8c0) Stream removed, broadcasting: 5\n" Sep 17 16:35:19.074: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 16:35:19.074: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 16:35:19.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 16:35:20.454: INFO: stderr: "I0917 16:35:20.316321 616 log.go:172] (0x28f6700) (0x28f6770) Create stream\nI0917 16:35:20.320307 616 log.go:172] (0x28f6700) (0x28f6770) Stream added, broadcasting: 1\nI0917 16:35:20.338482 616 log.go:172] (0x28f6700) Reply frame received for 1\nI0917 
16:35:20.339085 616 log.go:172] (0x28f6700) (0x24ba380) Create stream\nI0917 16:35:20.339169 616 log.go:172] (0x28f6700) (0x24ba380) Stream added, broadcasting: 3\nI0917 16:35:20.340544 616 log.go:172] (0x28f6700) Reply frame received for 3\nI0917 16:35:20.340795 616 log.go:172] (0x28f6700) (0x24bad90) Create stream\nI0917 16:35:20.340857 616 log.go:172] (0x28f6700) (0x24bad90) Stream added, broadcasting: 5\nI0917 16:35:20.341964 616 log.go:172] (0x28f6700) Reply frame received for 5\nI0917 16:35:20.400673 616 log.go:172] (0x28f6700) Data frame received for 5\nI0917 16:35:20.400899 616 log.go:172] (0x24bad90) (5) Data frame handling\nI0917 16:35:20.401260 616 log.go:172] (0x24bad90) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 16:35:20.434837 616 log.go:172] (0x28f6700) Data frame received for 5\nI0917 16:35:20.435023 616 log.go:172] (0x24bad90) (5) Data frame handling\nI0917 16:35:20.435276 616 log.go:172] (0x28f6700) Data frame received for 3\nI0917 16:35:20.435448 616 log.go:172] (0x24ba380) (3) Data frame handling\nI0917 16:35:20.435605 616 log.go:172] (0x24ba380) (3) Data frame sent\nI0917 16:35:20.435736 616 log.go:172] (0x28f6700) Data frame received for 3\nI0917 16:35:20.435834 616 log.go:172] (0x24ba380) (3) Data frame handling\nI0917 16:35:20.436269 616 log.go:172] (0x28f6700) Data frame received for 1\nI0917 16:35:20.436483 616 log.go:172] (0x28f6770) (1) Data frame handling\nI0917 16:35:20.436643 616 log.go:172] (0x28f6770) (1) Data frame sent\nI0917 16:35:20.438014 616 log.go:172] (0x28f6700) (0x28f6770) Stream removed, broadcasting: 1\nI0917 16:35:20.440188 616 log.go:172] (0x28f6700) Go away received\nI0917 16:35:20.444372 616 log.go:172] (0x28f6700) (0x28f6770) Stream removed, broadcasting: 1\nI0917 16:35:20.444598 616 log.go:172] (0x28f6700) (0x24ba380) Stream removed, broadcasting: 3\nI0917 16:35:20.444783 616 log.go:172] (0x28f6700) (0x24bad90) Stream removed, broadcasting: 5\n" Sep 17 16:35:20.455: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 16:35:20.455: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 16:35:20.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 16:35:21.872: INFO: stderr: "I0917 16:35:21.703066 639 log.go:172] (0x289c070) (0x289c0e0) Create stream\nI0917 16:35:21.707866 639 log.go:172] (0x289c070) (0x289c0e0) Stream added, broadcasting: 1\nI0917 16:35:21.721003 639 log.go:172] (0x289c070) Reply frame received for 1\nI0917 16:35:21.721468 639 log.go:172] (0x289c070) (0x289c3f0) Create stream\nI0917 16:35:21.721563 639 log.go:172] (0x289c070) (0x289c3f0) Stream added, broadcasting: 3\nI0917 16:35:21.723299 639 log.go:172] (0x289c070) Reply frame received for 3\nI0917 16:35:21.723494 639 log.go:172] (0x289c070) (0x2960070) Create stream\nI0917 16:35:21.723557 639 log.go:172] (0x289c070) (0x2960070) Stream added, broadcasting: 5\nI0917 16:35:21.724913 639 log.go:172] (0x289c070) Reply frame received for 5\nI0917 16:35:21.803823 639 log.go:172] (0x289c070) Data frame received for 5\nI0917 16:35:21.804094 639 log.go:172] (0x2960070) (5) Data frame handling\nI0917 16:35:21.804638 639 log.go:172] (0x2960070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 16:35:21.855398 639 log.go:172] (0x289c070) Data frame 
received for 5\nI0917 16:35:21.855712 639 log.go:172] (0x2960070) (5) Data frame handling\nI0917 16:35:21.855962 639 log.go:172] (0x289c070) Data frame received for 3\nI0917 16:35:21.856269 639 log.go:172] (0x289c3f0) (3) Data frame handling\nI0917 16:35:21.856480 639 log.go:172] (0x289c3f0) (3) Data frame sent\nI0917 16:35:21.856650 639 log.go:172] (0x289c070) Data frame received for 3\nI0917 16:35:21.856805 639 log.go:172] (0x289c3f0) (3) Data frame handling\nI0917 16:35:21.858747 639 log.go:172] (0x289c070) Data frame received for 1\nI0917 16:35:21.858872 639 log.go:172] (0x289c0e0) (1) Data frame handling\nI0917 16:35:21.858983 639 log.go:172] (0x289c0e0) (1) Data frame sent\nI0917 16:35:21.859687 639 log.go:172] (0x289c070) (0x289c0e0) Stream removed, broadcasting: 1\nI0917 16:35:21.862088 639 log.go:172] (0x289c070) Go away received\nI0917 16:35:21.864374 639 log.go:172] (0x289c070) (0x289c0e0) Stream removed, broadcasting: 1\nI0917 16:35:21.864850 639 log.go:172] (0x289c070) (0x289c3f0) Stream removed, broadcasting: 3\nI0917 16:35:21.865043 639 log.go:172] (0x289c070) (0x2960070) Stream removed, broadcasting: 5\n" Sep 17 16:35:21.873: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 16:35:21.873: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 16:35:21.873: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 16:35:21.879: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Sep 17 16:35:31.898: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 17 16:35:31.898: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 17 16:35:31.899: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 17 16:35:31.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999988443s Sep 17 16:35:32.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994082198s Sep 17 16:35:33.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.9846377s Sep 17 16:35:34.940: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975611419s Sep 17 16:35:35.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.966108s Sep 17 16:35:36.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9566168s Sep 17 16:35:37.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.947392429s Sep 17 16:35:38.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.938287577s Sep 17 16:35:39.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.92934029s Sep 17 16:35:40.994: INFO: Verifying statefulset ss doesn't scale past 3 for another 920.454833ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6237 Sep 17 16:35:42.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 16:35:43.383: INFO: stderr: "I0917 16:35:43.250086 662 log.go:172] (0x28301c0) (0x2830230) Create stream\nI0917 16:35:43.254953 662 log.go:172] (0x28301c0) (0x2830230) Stream added, broadcasting: 1\nI0917 16:35:43.271642 662 log.go:172] (0x28301c0) Reply frame received for 1\nI0917 16:35:43.272381 662 log.go:172] (0x28301c0) 
(0x25c43f0) Create stream\nI0917 16:35:43.272500 662 log.go:172] (0x28301c0) (0x25c43f0) Stream added, broadcasting: 3\nI0917 16:35:43.273775 662 log.go:172] (0x28301c0) Reply frame received for 3\nI0917 16:35:43.274028 662 log.go:172] (0x28301c0) (0x2844a10) Create stream\nI0917 16:35:43.274103 662 log.go:172] (0x28301c0) (0x2844a10) Stream added, broadcasting: 5\nI0917 16:35:43.275236 662 log.go:172] (0x28301c0) Reply frame received for 5\nI0917 16:35:43.366801 662 log.go:172] (0x28301c0) Data frame received for 5\nI0917 16:35:43.367142 662 log.go:172] (0x2844a10) (5) Data frame handling\nI0917 16:35:43.367589 662 log.go:172] (0x28301c0) Data frame received for 3\nI0917 16:35:43.367784 662 log.go:172] (0x25c43f0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0917 16:35:43.367962 662 log.go:172] (0x2844a10) (5) Data frame sent\nI0917 16:35:43.368223 662 log.go:172] (0x25c43f0) (3) Data frame sent\nI0917 16:35:43.368438 662 log.go:172] (0x28301c0) Data frame received for 1\nI0917 16:35:43.368553 662 log.go:172] (0x2830230) (1) Data frame handling\nI0917 16:35:43.368629 662 log.go:172] (0x28301c0) Data frame received for 5\nI0917 16:35:43.368752 662 log.go:172] (0x2844a10) (5) Data frame handling\nI0917 16:35:43.368900 662 log.go:172] (0x28301c0) Data frame received for 3\nI0917 16:35:43.368992 662 log.go:172] (0x25c43f0) (3) Data frame handling\nI0917 16:35:43.369063 662 log.go:172] (0x2830230) (1) Data frame sent\nI0917 16:35:43.369876 662 log.go:172] (0x28301c0) (0x2830230) Stream removed, broadcasting: 1\nI0917 16:35:43.372230 662 log.go:172] (0x28301c0) Go away received\nI0917 16:35:43.374908 662 log.go:172] (0x28301c0) (0x2830230) Stream removed, broadcasting: 1\nI0917 16:35:43.375368 662 log.go:172] (0x28301c0) (0x25c43f0) Stream removed, broadcasting: 3\nI0917 16:35:43.375761 662 log.go:172] (0x28301c0) (0x2844a10) Stream removed, broadcasting: 5\n" Sep 17 16:35:43.384: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 16:35:43.384: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 16:35:43.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 16:35:44.730: INFO: stderr: "I0917 16:35:44.617838 684 log.go:172] (0x2936c40) (0x2936cb0) Create stream\nI0917 16:35:44.621762 684 log.go:172] (0x2936c40) (0x2936cb0) Stream added, broadcasting: 1\nI0917 16:35:44.637698 684 log.go:172] (0x2936c40) Reply frame received for 1\nI0917 16:35:44.638135 684 log.go:172] (0x2936c40) (0x26fd5e0) Create stream\nI0917 16:35:44.638205 684 log.go:172] (0x2936c40) (0x26fd5e0) Stream added, broadcasting: 3\nI0917 16:35:44.639833 684 log.go:172] (0x2936c40) Reply frame received for 3\nI0917 16:35:44.640081 684 log.go:172] (0x2936c40) (0x25e6620) Create stream\nI0917 16:35:44.640183 684 log.go:172] (0x2936c40) (0x25e6620) Stream added, broadcasting: 5\nI0917 16:35:44.641286 684 log.go:172] (0x2936c40) Reply frame received for 5\nI0917 16:35:44.713970 684 log.go:172] (0x2936c40) Data frame received for 3\nI0917 16:35:44.714250 684 log.go:172] (0x2936c40) Data frame received for 5\nI0917 16:35:44.714456 684 log.go:172] (0x2936c40) Data frame received for 1\nI0917 16:35:44.714589 684 log.go:172] (0x2936cb0) (1) Data frame handling\nI0917 16:35:44.714682 684 log.go:172] (0x25e6620) (5) Data frame handling\nI0917 
16:35:44.714941 684 log.go:172] (0x26fd5e0) (3) Data frame handling\nI0917 16:35:44.715779 684 log.go:172] (0x2936cb0) (1) Data frame sent\nI0917 16:35:44.716453 684 log.go:172] (0x26fd5e0) (3) Data frame sent\nI0917 16:35:44.716530 684 log.go:172] (0x2936c40) Data frame received for 3\nI0917 16:35:44.716776 684 log.go:172] (0x25e6620) (5) Data frame sent\nI0917 16:35:44.716968 684 log.go:172] (0x2936c40) Data frame received for 5\nI0917 16:35:44.717114 684 log.go:172] (0x2936c40) (0x2936cb0) Stream removed, broadcasting: 1\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0917 16:35:44.717807 684 log.go:172] (0x26fd5e0) (3) Data frame handling\nI0917 16:35:44.718376 684 log.go:172] (0x25e6620) (5) Data frame handling\nI0917 16:35:44.720631 684 log.go:172] (0x2936c40) Go away received\nI0917 16:35:44.723046 684 log.go:172] (0x2936c40) (0x2936cb0) Stream removed, broadcasting: 1\nI0917 16:35:44.723226 684 log.go:172] (0x2936c40) (0x26fd5e0) Stream removed, broadcasting: 3\nI0917 16:35:44.723375 684 log.go:172] (0x2936c40) (0x25e6620) Stream removed, broadcasting: 5\n" Sep 17 16:35:44.731: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 16:35:44.732: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 16:35:44.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 16:35:46.556: INFO: rc: 1 Sep 17 16:35:46.558: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: I0917 16:35:46.443348 707 log.go:172] (0x28b40e0) (0x28b4150) Create stream I0917 16:35:46.447810 707 log.go:172] (0x28b40e0) (0x28b4150) Stream added, broadcasting: 1 I0917 16:35:46.466923 707 log.go:172] (0x28b40e0) Reply frame received for 1 I0917 16:35:46.467435 707 log.go:172] (0x28b40e0) (0x28b42a0) Create stream I0917 16:35:46.467516 707 log.go:172] (0x28b40e0) (0x28b42a0) Stream added, broadcasting: 3 I0917 16:35:46.469120 707 log.go:172] (0x28b40e0) Reply frame received for 3 I0917 16:35:46.469367 707 log.go:172] (0x28b40e0) (0x2b2c070) Create stream I0917 16:35:46.469441 707 log.go:172] (0x28b40e0) (0x2b2c070) Stream added, broadcasting: 5 I0917 16:35:46.470476 707 log.go:172] (0x28b40e0) Reply frame received for 5 I0917 16:35:46.539179 707 log.go:172] (0x28b40e0) Data frame received for 1 I0917 16:35:46.539529 707 log.go:172] (0x28b4150) (1) Data frame handling I0917 16:35:46.539955 707 log.go:172] (0x28b4150) (1) Data frame sent I0917 16:35:46.541200 707 log.go:172] (0x28b40e0) (0x28b42a0) Stream removed, broadcasting: 3 I0917 16:35:46.543625 707 log.go:172] (0x28b40e0) (0x2b2c070) Stream removed, broadcasting: 5 I0917 16:35:46.543890 707 log.go:172] (0x28b40e0) (0x28b4150) Stream removed, broadcasting: 1 I0917 16:35:46.545224 707 log.go:172] (0x28b40e0) Go away received I0917 16:35:46.547935 707 log.go:172] (0x28b40e0) (0x28b4150) Stream removed, broadcasting: 1 I0917 16:35:46.548182 707 log.go:172] (0x28b40e0) (0x28b42a0) Stream removed, broadcasting: 3 I0917 16:35:46.548263 707 log.go:172] (0x28b40e0) (0x2b2c070) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed 
to create exec "fb916dc9bc2f785e11d229b882826c30564149248e350dd8927e8a50bae7c78a": task afd635b8441d5a5618ca4d993a7ec665b48613402d0b4a984509129f1f16fe46 not found: not found error: exit status 1 Sep 17 16:35:56.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 16:35:57.793: INFO: rc: 1 Sep 17 16:35:57.793: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 [... 25 further identical attempts, retried every ~10s from 16:36:07 through 16:40:37, each logging rc: 1 and "Waiting 10s to retry failed RunHostCmd" with the same stderr: Error from server (NotFound): pods "ss-2" not found ...] Sep 17 16:40:48.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6237 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 16:40:49.292: INFO: rc: 1 Sep 17 16:40:49.293: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Sep 17 16:40:49.293: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Sep 17 16:40:49.313: INFO: Deleting all statefulset in ns statefulset-6237 Sep 17 16:40:49.316: INFO: Scaling statefulset ss to 0 Sep 17 16:40:49.326: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 16:40:49.328: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:40:49.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6237" for this suite.
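------------------------------
The five-minute stretch above is the e2e framework's RunHostCmd retry loop: each failed kubectl exec is reissued after 10s until it succeeds or the caller gives up; here pod ss-2 had already been deleted by the scale-down, so every attempt returned NotFound. A minimal standalone sketch of that retry pattern in Go, assuming kubectl is on PATH and reusing the namespace, pod, and command from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmdWithRetry mirrors the behavior seen above: run the command via
// kubectl exec, and on a non-zero exit wait 10s and try again, up to
// maxRetries attempts.
func runHostCmdWithRetry(ns, pod, cmd string, maxRetries int) (string, error) {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		out, err := exec.Command("kubectl", "exec", "-n", ns, pod,
			"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(10 * time.Second) // matches "Waiting 10s to retry failed RunHostCmd"
	}
	return "", lastErr
}

func main() {
	out, err := runHostCmdWithRetry("statefulset-6237", "ss-2",
		"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true", 30)
	fmt.Println(out, err)
}
------------------------------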
• [SLOW TEST:375.531 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":34,"skipped":536,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:40:49.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:41:03.831: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:41:05.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:41:07.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, 
loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957663, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:41:10.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:41:23.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8458" for this suite. STEP: Destroying namespace "webhook-8458-markers" for this suite. 
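------------------------------
What the timeout steps above exercise: the webhook server sleeps 5s, so with timeoutSeconds: 1 the API server cancels the call and failurePolicy decides the outcome (Fail rejects the request, Ignore admits it); an unset timeout defaults to 10s in v1. A hedged sketch of such a registration using k8s.io/api/admissionregistration/v1 types; the rule, service path, and object names below are illustrative, not the test's exact objects:

package main

import (
	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func slowWebhookConfig() *admissionv1.ValidatingWebhookConfiguration {
	fail := admissionv1.Fail // reject the request if the webhook times out
	none := admissionv1.SideEffectClassNone
	path := "/always-allow-delay-5s" // illustrative server path
	return &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook.example.com"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "slow-webhook.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-8458", Name: "e2e-test-webhook", Path: &path,
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"},
				},
			}},
			FailurePolicy:           &fail,
			TimeoutSeconds:          int32Ptr(1), // shorter than the webhook's 5s latency
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}

func main() { _ = slowWebhookConfig() }
------------------------------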
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:33.915 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":35,"skipped":544,"failed":0} [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:41:23.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:41:23.368: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:41:27.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5056" for this suite. 
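------------------------------
The exec the websocket test performs goes through the same pods/exec subresource a client-go client uses. A sketch of the client-side call, using the SPDY transport from k8s.io/client-go/tools/remotecommand rather than the raw websocket the test dials; the pod name and command are illustrative:

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Build the pods/exec subresource request against the API server.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pods-5056").Name("pod-exec-websocket"). // pod name illustrative
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"/bin/sh", "-c", "echo remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Stream the command's output back over the upgraded connection.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}
------------------------------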
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":544,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:41:27.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Sep 17 16:41:27.767: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-resource-version 824a66d2-12ad-4d3e-bb25-94812dddc468 1065510 0 2020-09-17 16:41:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Sep 17 16:41:27.769: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-resource-version 824a66d2-12ad-4d3e-bb25-94812dddc468 1065511 0 2020-09-17 16:41:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:41:27.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1420" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":37,"skipped":562,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:41:27.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Sep 17 16:41:27.892: INFO: Waiting up to 5m0s for pod "pod-444f2639-5b79-4079-abe0-9be68337b47f" in namespace "emptydir-4397" to be "success or failure" Sep 17 16:41:27.899: INFO: Pod "pod-444f2639-5b79-4079-abe0-9be68337b47f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.205982ms Sep 17 16:41:29.905: INFO: Pod "pod-444f2639-5b79-4079-abe0-9be68337b47f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013415313s Sep 17 16:41:31.912: INFO: Pod "pod-444f2639-5b79-4079-abe0-9be68337b47f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019779783s STEP: Saw pod success Sep 17 16:41:31.912: INFO: Pod "pod-444f2639-5b79-4079-abe0-9be68337b47f" satisfied condition "success or failure" Sep 17 16:41:31.916: INFO: Trying to get logs from node jerma-worker pod pod-444f2639-5b79-4079-abe0-9be68337b47f container test-container: STEP: delete the pod Sep 17 16:41:31.999: INFO: Waiting for pod pod-444f2639-5b79-4079-abe0-9be68337b47f to disappear Sep 17 16:41:32.025: INFO: Pod pod-444f2639-5b79-4079-abe0-9be68337b47f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:41:32.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4397" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:41:32.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:41:32.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Sep 17 16:41:32.769: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-17T16:41:32Z generation:1 name:name1 resourceVersion:1065575 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:91ae8bc0-6b49-43d0-9977-d279eeca9e6e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Sep 17 16:41:42.778: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-17T16:41:42Z generation:1 name:name2 resourceVersion:1065621 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1cc3029f-5f9c-400e-990d-57e935524950] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Sep 17 16:41:52.790: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-17T16:41:32Z generation:2 name:name1 resourceVersion:1065651 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:91ae8bc0-6b49-43d0-9977-d279eeca9e6e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Sep 17 16:42:02.801: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-17T16:41:42Z generation:2 name:name2 resourceVersion:1065681 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1cc3029f-5f9c-400e-990d-57e935524950] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Sep 17 16:42:12.830: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-17T16:41:32Z generation:2 name:name1 resourceVersion:1065714 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:91ae8bc0-6b49-43d0-9977-d279eeca9e6e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Sep 17 16:42:22.842: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-09-17T16:41:42Z generation:2 name:name2 resourceVersion:1065742 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1cc3029f-5f9c-400e-990d-57e935524950] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:42:34.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8875" for this suite. • [SLOW TEST:62.330 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":39,"skipped":614,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:42:34.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-2cb69996-b7fd-4537-928e-f9a09b28b990 STEP: Creating a pod to test consume configMaps Sep 17 16:42:34.465: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1" in namespace "configmap-1690" to be "success or failure" Sep 17 16:42:34.474: INFO: Pod "pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.678901ms Sep 17 16:42:36.540: INFO: Pod "pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075088443s Sep 17 16:42:38.547: INFO: Pod "pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.081704644s Sep 17 16:42:40.552: INFO: Pod "pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.0868655s STEP: Saw pod success Sep 17 16:42:40.552: INFO: Pod "pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1" satisfied condition "success or failure" Sep 17 16:42:40.555: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1 container configmap-volume-test: STEP: delete the pod Sep 17 16:42:40.610: INFO: Waiting for pod pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1 to disappear Sep 17 16:42:40.613: INFO: Pod pod-configmaps-0d3d617d-98a8-4e6e-91f5-e67b77ada0e1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:42:40.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1690" for this suite. • [SLOW TEST:6.254 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:42:40.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2093 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2093 I0917 16:42:40.812653 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2093, replica count: 2 I0917 16:42:43.865913 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0917 16:42:46.868264 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 17 16:42:46.869: INFO: Creating new exec pod Sep 17 16:42:51.899: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2093 execpodzrqvd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 17 16:42:53.255: INFO: stderr: "I0917 16:42:53.134433 1341 log.go:172] (0x29121c0) (0x2912230) Create stream\nI0917 16:42:53.136549 1341 log.go:172] (0x29121c0) (0x2912230) Stream added, broadcasting: 1\nI0917 16:42:53.152270 1341 log.go:172] (0x29121c0) Reply frame received for 1\nI0917 16:42:53.152803 1341 log.go:172] (0x29121c0) (0x25e8a10) Create stream\nI0917 16:42:53.152886 1341 log.go:172] (0x29121c0) (0x25e8a10) Stream added, broadcasting: 3\nI0917 16:42:53.154214 1341 log.go:172] (0x29121c0) Reply frame received for 3\nI0917 16:42:53.154567 1341 log.go:172] (0x29121c0) (0x2715490) Create stream\nI0917 16:42:53.154642 1341 log.go:172] (0x29121c0) (0x2715490) Stream added, broadcasting: 5\nI0917 16:42:53.155889 1341 log.go:172] (0x29121c0) Reply frame received for 5\nI0917 16:42:53.237789 1341 log.go:172] (0x29121c0) Data frame received for 5\nI0917 16:42:53.238285 1341 log.go:172] (0x2715490) (5) Data frame handling\nI0917 16:42:53.238644 1341 log.go:172] (0x29121c0) Data frame received for 3\nI0917 16:42:53.238824 1341 log.go:172] (0x25e8a10) (3) Data frame handling\nI0917 16:42:53.238969 1341 log.go:172] (0x29121c0) Data frame received for 1\nI0917 16:42:53.239156 1341 log.go:172] (0x2912230) (1) Data frame handling\nI0917 16:42:53.240462 1341 log.go:172] (0x2912230) (1) Data frame sent\nI0917 16:42:53.240675 1341 log.go:172] (0x2715490) (5) Data frame sent\nI0917 16:42:53.240846 1341 log.go:172] (0x29121c0) Data frame received for 5\nI0917 16:42:53.240983 1341 log.go:172] (0x2715490) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0917 16:42:53.242016 1341 log.go:172] (0x29121c0) (0x2912230) Stream removed, broadcasting: 1\nI0917 16:42:53.243757 1341 log.go:172] (0x2715490) (5) Data frame sent\nI0917 16:42:53.244521 1341 log.go:172] (0x29121c0) Data frame received for 5\nI0917 16:42:53.244607 1341 log.go:172] (0x2715490) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0917 16:42:53.245799 1341 log.go:172] (0x29121c0) Go away received\nI0917 16:42:53.248349 1341 log.go:172] (0x29121c0) (0x2912230) Stream removed, broadcasting: 1\nI0917 16:42:53.248537 1341 log.go:172] (0x29121c0) (0x25e8a10) Stream removed, broadcasting: 3\nI0917 16:42:53.248668 1341 log.go:172] (0x29121c0) (0x2715490) Stream removed, broadcasting: 5\n" Sep 17 16:42:53.256: INFO: stdout: "" Sep 17 16:42:53.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2093 execpodzrqvd -- /bin/sh -x -c nc -zv -t -w 2 10.102.20.186 80' Sep 17 16:42:54.630: INFO: stderr: "I0917 16:42:54.510787 1365 log.go:172] (0x29384d0) (0x2938540) Create stream\nI0917 16:42:54.514249 1365 log.go:172] (0x29384d0) (0x2938540) Stream added, broadcasting: 1\nI0917 16:42:54.529154 1365 log.go:172] (0x29384d0) Reply frame received for 1\nI0917 16:42:54.529729 1365 log.go:172] (0x29384d0) (0x2cc2070) Create stream\nI0917 16:42:54.529818 1365 log.go:172] (0x29384d0) (0x2cc2070) Stream added, broadcasting: 3\nI0917 16:42:54.531221 1365 log.go:172] (0x29384d0) Reply frame received for 3\nI0917 16:42:54.531449 1365 log.go:172] (0x29384d0) (0x2cc22a0) Create stream\nI0917 16:42:54.531516 1365 log.go:172] (0x29384d0) (0x2cc22a0) Stream added, broadcasting: 5\nI0917 16:42:54.532696 1365 log.go:172] (0x29384d0) Reply frame received for 5\nI0917 16:42:54.611349 1365 log.go:172] 
(0x29384d0) Data frame received for 3\nI0917 16:42:54.611588 1365 log.go:172] (0x29384d0) Data frame received for 5\nI0917 16:42:54.611843 1365 log.go:172] (0x29384d0) Data frame received for 1\nI0917 16:42:54.611969 1365 log.go:172] (0x2cc2070) (3) Data frame handling\nI0917 16:42:54.612290 1365 log.go:172] (0x2cc22a0) (5) Data frame handling\nI0917 16:42:54.612492 1365 log.go:172] (0x2938540) (1) Data frame handling\nI0917 16:42:54.613127 1365 log.go:172] (0x2cc22a0) (5) Data frame sent\nI0917 16:42:54.613415 1365 log.go:172] (0x2938540) (1) Data frame sent\nI0917 16:42:54.613814 1365 log.go:172] (0x29384d0) Data frame received for 5\n+ nc -zv -t -w 2 10.102.20.186 80\nConnection to 10.102.20.186 80 port [tcp/http] succeeded!\nI0917 16:42:54.613993 1365 log.go:172] (0x2cc22a0) (5) Data frame handling\nI0917 16:42:54.616891 1365 log.go:172] (0x29384d0) (0x2938540) Stream removed, broadcasting: 1\nI0917 16:42:54.617305 1365 log.go:172] (0x29384d0) Go away received\nI0917 16:42:54.620817 1365 log.go:172] (0x29384d0) (0x2938540) Stream removed, broadcasting: 1\nI0917 16:42:54.621122 1365 log.go:172] (0x29384d0) (0x2cc2070) Stream removed, broadcasting: 3\nI0917 16:42:54.621366 1365 log.go:172] (0x29384d0) (0x2cc22a0) Stream removed, broadcasting: 5\n" Sep 17 16:42:54.631: INFO: stdout: "" Sep 17 16:42:54.631: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:42:54.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2093" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.050 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":41,"skipped":651,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:42:54.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Sep 17 16:42:54.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9429' Sep 17 16:42:56.299: INFO: stderr: "" Sep 17 16:42:56.299: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 17 16:42:56.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9429' Sep 17 16:42:57.404: INFO: stderr: "" Sep 17 16:42:57.404: INFO: stdout: "update-demo-nautilus-fhndb update-demo-nautilus-j5bpf " Sep 17 16:42:57.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fhndb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9429' Sep 17 16:42:58.503: INFO: stderr: "" Sep 17 16:42:58.503: INFO: stdout: "" Sep 17 16:42:58.503: INFO: update-demo-nautilus-fhndb is created but not running Sep 17 16:43:03.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9429' Sep 17 16:43:04.652: INFO: stderr: "" Sep 17 16:43:04.652: INFO: stdout: "update-demo-nautilus-fhndb update-demo-nautilus-j5bpf " Sep 17 16:43:04.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fhndb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9429' Sep 17 16:43:05.776: INFO: stderr: "" Sep 17 16:43:05.776: INFO: stdout: "true" Sep 17 16:43:05.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fhndb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9429' Sep 17 16:43:06.928: INFO: stderr: "" Sep 17 16:43:06.928: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 16:43:06.929: INFO: validating pod update-demo-nautilus-fhndb Sep 17 16:43:06.938: INFO: got data: { "image": "nautilus.jpg" } Sep 17 16:43:06.938: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 16:43:06.939: INFO: update-demo-nautilus-fhndb is verified up and running Sep 17 16:43:06.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5bpf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9429' Sep 17 16:43:08.086: INFO: stderr: "" Sep 17 16:43:08.086: INFO: stdout: "true" Sep 17 16:43:08.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5bpf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9429' Sep 17 16:43:09.207: INFO: stderr: "" Sep 17 16:43:09.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 16:43:09.207: INFO: validating pod update-demo-nautilus-j5bpf Sep 17 16:43:09.213: INFO: got data: { "image": "nautilus.jpg" } Sep 17 16:43:09.213: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 16:43:09.213: INFO: update-demo-nautilus-j5bpf is verified up and running STEP: using delete to clean up resources Sep 17 16:43:09.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9429' Sep 17 16:43:10.350: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 16:43:10.350: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 17 16:43:10.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9429' Sep 17 16:43:11.647: INFO: stderr: "No resources found in kubectl-9429 namespace.\n" Sep 17 16:43:11.648: INFO: stdout: "" Sep 17 16:43:11.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9429 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 17 16:43:12.779: INFO: stderr: "" Sep 17 16:43:12.779: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:43:12.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9429" for this suite. 
• [SLOW TEST:18.112 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":42,"skipped":688,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:43:12.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-2ee352c9-5fc3-4a1b-8d92-ac1025bec170 STEP: Creating a pod to test consume secrets Sep 17 16:43:12.897: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f" in namespace "projected-3778" to be "success or failure" Sep 17 16:43:12.913: INFO: Pod "pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.637251ms Sep 17 16:43:14.921: INFO: Pod "pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023809593s Sep 17 16:43:16.928: INFO: Pod "pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030986209s STEP: Saw pod success Sep 17 16:43:16.928: INFO: Pod "pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f" satisfied condition "success or failure" Sep 17 16:43:16.933: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f container projected-secret-volume-test: STEP: delete the pod Sep 17 16:43:16.956: INFO: Waiting for pod pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f to disappear Sep 17 16:43:16.960: INFO: Pod pod-projected-secrets-51c24f15-72b8-4a77-aa54-cf1b84ab057f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:43:16.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3778" for this suite. 
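------------------------------
"with mappings" means the projected secret declares items, remapping a key to a different file path (and optionally a mode) inside the volume instead of using the key name as the file name. A sketch of that volume source; the secret name comes from the test, while the key, path, and mode are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// projectedSecretVolume remaps key "data-1" to "new-path-data-1" with mode
// 0400, mirroring the shape the "with mappings" test projects into the pod.
func projectedSecretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-map-2ee352c9-5fc3-4a1b-8d92-ac1025bec170",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // illustrative key
							Path: "new-path-data-1", // remapped file name inside the volume
							Mode: int32Ptr(0400),
						}},
					},
				}},
			},
		},
	}
}

func main() { _ = projectedSecretVolume() }
------------------------------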
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":692,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:43:16.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-4316e3c1-8ef7-44c8-b7e1-ba1e7396675e STEP: Creating a pod to test consume secrets Sep 17 16:43:17.071: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6" in namespace "projected-1010" to be "success or failure" Sep 17 16:43:17.087: INFO: Pod "pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.942122ms Sep 17 16:43:19.095: INFO: Pod "pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023966974s Sep 17 16:43:21.102: INFO: Pod "pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030848768s STEP: Saw pod success Sep 17 16:43:21.103: INFO: Pod "pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6" satisfied condition "success or failure" Sep 17 16:43:21.108: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6 container projected-secret-volume-test: STEP: delete the pod Sep 17 16:43:21.169: INFO: Waiting for pod pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6 to disappear Sep 17 16:43:21.180: INFO: Pod pod-projected-secrets-a8692011-b4d0-492e-9245-a631540cdca6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:43:21.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1010" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":718,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:43:21.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:43:25.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4472" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":736,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:43:25.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 16:43:25.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae" in namespace "downward-api-7222" to be "success or failure" Sep 17 16:43:25.392: INFO: Pod "downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae": Phase="Pending", Reason="", readiness=false. Elapsed: 7.131762ms Sep 17 16:43:27.399: INFO: Pod "downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014310733s Sep 17 16:43:29.406: INFO: Pod "downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.020630234s Sep 17 16:43:31.411: INFO: Pod "downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0255592s STEP: Saw pod success Sep 17 16:43:31.411: INFO: Pod "downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae" satisfied condition "success or failure" Sep 17 16:43:31.414: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae container client-container: STEP: delete the pod Sep 17 16:43:31.474: INFO: Waiting for pod downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae to disappear Sep 17 16:43:31.478: INFO: Pod downwardapi-volume-515cc9dd-c5c2-4491-8950-0d14c7e50bae no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:43:31.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7222" for this suite. • [SLOW TEST:6.197 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":740,"failed":0} [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:43:31.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:43:55.577: INFO: Container started at 2020-09-17 16:43:33 +0000 UTC, pod became ready at 2020-09-17 16:43:54 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:43:55.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1828" for this suite. 
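------------------------------
The readiness-probe test above (container started 16:43:33, Ready only at 16:43:54) hinges on initialDelaySeconds: the kubelet does not begin probing until the delay elapses, so the pod cannot become Ready earlier, and a failing readiness probe only marks the pod NotReady — it never restarts the container. A minimal sketch with illustrative timings, the httpd image reused from this run standing in for the test's webserver:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/httpd:2.4.38-alpine
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20    # no probe, and therefore no Ready condition, before this
      periodSeconds: 5
      failureThreshold: 3
------------------------------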
• [SLOW TEST:24.101 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":740,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:43:55.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 17 16:43:55.686: INFO: Waiting up to 5m0s for pod "pod-fca8f9e3-e72a-46c9-86a2-200ed7943758" in namespace "emptydir-377" to be "success or failure" Sep 17 16:43:55.691: INFO: Pod "pod-fca8f9e3-e72a-46c9-86a2-200ed7943758": Phase="Pending", Reason="", readiness=false. Elapsed: 5.267078ms Sep 17 16:43:57.698: INFO: Pod "pod-fca8f9e3-e72a-46c9-86a2-200ed7943758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012242362s Sep 17 16:43:59.705: INFO: Pod "pod-fca8f9e3-e72a-46c9-86a2-200ed7943758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019254374s STEP: Saw pod success Sep 17 16:43:59.705: INFO: Pod "pod-fca8f9e3-e72a-46c9-86a2-200ed7943758" satisfied condition "success or failure" Sep 17 16:43:59.710: INFO: Trying to get logs from node jerma-worker pod pod-fca8f9e3-e72a-46c9-86a2-200ed7943758 container test-container: STEP: delete the pod Sep 17 16:43:59.759: INFO: Waiting for pod pod-fca8f9e3-e72a-46c9-86a2-200ed7943758 to disappear Sep 17 16:43:59.784: INFO: Pod pod-fca8f9e3-e72a-46c9-86a2-200ed7943758 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:43:59.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-377" for this suite. 
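------------------------------
The "(non-root,0644,default)" case above decodes as: a non-root container writing a 0644-mode file on an emptyDir of the default medium (node disk rather than tmpfs "Memory"). A rough equivalent of what the test pod does, with an arbitrary non-root UID and busybox standing in for the e2e mount-test image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                          # any non-root UID
  volumes:
  - name: test-volume
    emptyDir: {}                             # no medium set = default (node storage)
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo mount-tester > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test
------------------------------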
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:43:59.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:44:11.275: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:44:13.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957851, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957851, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957851, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735957851, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:44:16.317: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:44:16.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5895" for this suite. 
STEP: Destroying namespace "webhook-5895-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.257 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":49,"skipped":856,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:44:17.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-efcf216f-f88c-48d6-8617-9d40b44b2ac1 in namespace container-probe-9557 Sep 17 16:44:21.202: INFO: Started pod test-webserver-efcf216f-f88c-48d6-8617-9d40b44b2ac1 in namespace container-probe-9557 STEP: checking the pod's current state and verifying that restartCount is present Sep 17 16:44:21.208: INFO: Initial restart count of pod test-webserver-efcf216f-f88c-48d6-8617-9d40b44b2ac1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:48:22.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9557" for this suite. 
• [SLOW TEST:245.620 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:48:22.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0917 16:48:24.071411 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 17 16:48:24.072: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:48:24.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3791" for this suite. 
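------------------------------
Both garbage-collector tests in this run (delete-RS here, orphan-RS later) turn on ownerReferences: the ReplicaSet a Deployment creates points back at its owner, so deleting the Deployment with the default propagation cascades to the ReplicaSet, while propagationPolicy=Orphan makes the GC strip the reference instead of deleting the dependent. Trimmed sketch of the metadata involved; the name, pod-template hash, and UID are illustrative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-deployment-6c54bd5869           # <deployment name>-<pod-template-hash>
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: test-deployment
    uid: 0e9f4c6a-1c2d-4e5f-8a9b-0c1d2e3f4a5b    # illustrative UID
    controller: true
    blockOwnerDeletion: true
------------------------------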
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":51,"skipped":914,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:48:24.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:48:24.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7419" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":52,"skipped":933,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:48:24.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Sep 17 16:48:24.448: INFO: Waiting up to 5m0s for pod "var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797" in namespace "var-expansion-6821" to be "success or failure" Sep 17 16:48:24.487: INFO: Pod "var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797": Phase="Pending", Reason="", readiness=false. Elapsed: 37.80357ms Sep 17 16:48:26.500: INFO: Pod "var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051522551s Sep 17 16:48:28.506: INFO: Pod "var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.057696291s STEP: Saw pod success Sep 17 16:48:28.507: INFO: Pod "var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797" satisfied condition "success or failure" Sep 17 16:48:28.511: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797 container dapi-container: STEP: delete the pod Sep 17 16:48:28.566: INFO: Waiting for pod var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797 to disappear Sep 17 16:48:28.577: INFO: Pod var-expansion-abc26219-2055-42f9-b1dc-ca3c46ff7797 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:48:28.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6821" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":947,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:48:28.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Sep 17 16:48:28.677: INFO: Waiting up to 5m0s for pod "var-expansion-4592d2d9-c023-48a1-ae19-394c46132545" in namespace "var-expansion-4482" to be "success or failure" Sep 17 16:48:28.681: INFO: Pod "var-expansion-4592d2d9-c023-48a1-ae19-394c46132545": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680478ms Sep 17 16:48:30.688: INFO: Pod "var-expansion-4592d2d9-c023-48a1-ae19-394c46132545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010918482s Sep 17 16:48:32.694: INFO: Pod "var-expansion-4592d2d9-c023-48a1-ae19-394c46132545": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017344545s STEP: Saw pod success Sep 17 16:48:32.695: INFO: Pod "var-expansion-4592d2d9-c023-48a1-ae19-394c46132545" satisfied condition "success or failure" Sep 17 16:48:32.699: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-4592d2d9-c023-48a1-ae19-394c46132545 container dapi-container: STEP: delete the pod Sep 17 16:48:32.778: INFO: Waiting for pod var-expansion-4592d2d9-c023-48a1-ae19-394c46132545 to disappear Sep 17 16:48:32.792: INFO: Pod var-expansion-4592d2d9-c023-48a1-ae19-394c46132545 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:48:32.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4482" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":1010,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:48:32.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Sep 17 16:48:37.443: INFO: Successfully updated pod "labelsupdate2a0d205e-d394-4646-b580-be4326ccb2ed" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:48:39.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9052" for this suite. 
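------------------------------
The labels-update test above relies on the one downward-API path that is live: fieldRef files in a downwardAPI volume are rewritten by the kubelet when the pod's metadata changes (env vars, by contrast, are fixed at container start). Minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    key1: value1                 # the test patches this and waits for the file to change
spec:
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
------------------------------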
• [SLOW TEST:6.689 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1013,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:48:39.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:48:46.765: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:48:48.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:48:50.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958126, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:48:53.829: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:48:53.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7362" for this suite. STEP: Destroying namespace "webhook-7362-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.169 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":56,"skipped":1018,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:48:54.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0917 16:49:25.118930 7 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 17 16:49:25.119: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:49:25.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9195" for this suite. • [SLOW TEST:30.470 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":57,"skipped":1026,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:49:25.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 17 16:49:25.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 
--image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-579' Sep 17 16:49:26.374: INFO: stderr: "" Sep 17 16:49:26.375: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 17 16:49:31.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-579 -o json' Sep 17 16:49:35.454: INFO: stderr: "" Sep 17 16:49:35.454: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-17T16:49:26Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-579\",\n \"resourceVersion\": \"1067640\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-579/pods/e2e-test-httpd-pod\",\n \"uid\": \"46e8483c-e316-4f1d-a5ed-fd43fc17c1d3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-n5pl6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-n5pl6\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-n5pl6\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-17T16:49:26Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-17T16:49:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-17T16:49:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-17T16:49:26Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a06fa957b9a61e5e2a469fbffcba448f18d34a66ae1e43ea226524dd8ab8c1e7\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-17T16:49:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.206\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.206\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-17T16:49:26Z\"\n 
}\n}\n" STEP: replace the image in the pod Sep 17 16:49:35.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-579' Sep 17 16:49:36.880: INFO: stderr: "" Sep 17 16:49:36.880: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801 Sep 17 16:49:36.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-579' Sep 17 16:49:40.268: INFO: stderr: "" Sep 17 16:49:40.268: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:49:40.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-579" for this suite. • [SLOW TEST:15.141 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792 should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":58,"skipped":1044,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:49:40.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Sep 17 16:49:40.447: INFO: Waiting up to 5m0s for pod "downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91" in namespace "downward-api-2387" to be "success or failure" Sep 17 16:49:40.480: INFO: Pod "downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91": Phase="Pending", Reason="", readiness=false. Elapsed: 32.595694ms Sep 17 16:49:42.489: INFO: Pod "downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041817072s Sep 17 16:49:44.497: INFO: Pod "downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049605403s STEP: Saw pod success Sep 17 16:49:44.497: INFO: Pod "downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91" satisfied condition "success or failure" Sep 17 16:49:44.502: INFO: Trying to get logs from node jerma-worker2 pod downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91 container dapi-container: STEP: delete the pod Sep 17 16:49:44.604: INFO: Waiting for pod downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91 to disappear Sep 17 16:49:44.712: INFO: Pod downward-api-8ffd48e9-a829-414a-a5aa-04721d1aeb91 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:49:44.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2387" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1052,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:49:44.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:49:48.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3674" for this suite. 
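------------------------------
The Downward API env-var test above maps pod fields into the environment via fieldRef; unlike the volume-based variants, these values are resolved once at container start. Sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo name=$POD_NAME ns=$POD_NAMESPACE ip=$POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
------------------------------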
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":60,"skipped":1063,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:49:48.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:49:59.413: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:50:01.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958199, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958199, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958199, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958199, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:50:04.514: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:50:04.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8613" for this suite. STEP: Destroying namespace "webhook-8613-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.747 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":61,"skipped":1071,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:50:04.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-7684/secret-test-227409c8-6fe9-42a1-87b6-04a252280f94 STEP: Creating a pod to test consume secrets Sep 17 16:50:04.801: INFO: Waiting up to 5m0s for pod "pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f" in namespace "secrets-7684" to be "success or failure" Sep 17 16:50:04.821: INFO: Pod "pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.969803ms Sep 17 16:50:06.827: INFO: Pod "pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025880958s Sep 17 16:50:08.832: INFO: Pod "pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031482537s STEP: Saw pod success Sep 17 16:50:08.833: INFO: Pod "pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f" satisfied condition "success or failure" Sep 17 16:50:08.837: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f container env-test: STEP: delete the pod Sep 17 16:50:08.856: INFO: Waiting for pod pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f to disappear Sep 17 16:50:08.994: INFO: Pod pod-configmaps-cca8f0ba-6fba-4c58-8a9a-ffbfbae08b3f no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:50:08.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7684" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1075,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:50:09.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-d5a3aab8-dd5d-4c2a-92ed-f5459c1abe1e STEP: Creating a pod to test consume configMaps Sep 17 16:50:09.178: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578" in namespace "projected-476" to be "success or failure" Sep 17 16:50:09.184: INFO: Pod "pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569401ms Sep 17 16:50:11.191: INFO: Pod "pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013519747s Sep 17 16:50:13.199: INFO: Pod "pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021225407s STEP: Saw pod success Sep 17 16:50:13.199: INFO: Pod "pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578" satisfied condition "success or failure" Sep 17 16:50:13.205: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578 container projected-configmap-volume-test: STEP: delete the pod Sep 17 16:50:13.227: INFO: Waiting for pod pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578 to disappear Sep 17 16:50:13.238: INFO: Pod pod-projected-configmaps-b49619e2-b30f-4fe8-86fa-daa948877578 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:50:13.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-476" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1076,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:50:13.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:50:20.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-481" for this suite. • [SLOW TEST:7.147 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":64,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:50:20.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Sep 17 16:50:20.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Sep 17 16:50:21.603: INFO: stderr: "" Sep 17 16:50:21.603: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:50:21.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-81" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":65,"skipped":1142,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:50:21.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 16:50:21.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589" in namespace "projected-6412" to be "success or failure" Sep 17 16:50:21.731: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589": Phase="Pending", Reason="", readiness=false. Elapsed: 45.417181ms Sep 17 16:50:23.738: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052509212s Sep 17 16:50:26.547: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.861094462s STEP: Saw pod success Sep 17 16:50:26.547: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589" satisfied condition "success or failure" Sep 17 16:50:26.554: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589 container client-container: STEP: delete the pod Sep 17 16:50:26.744: INFO: Waiting for pod downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589 to disappear Sep 17 16:50:26.779: INFO: Pod downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:50:26.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6412" for this suite. 
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:50:21.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 16:50:21.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589" in namespace "projected-6412" to be "success or failure"
Sep 17 16:50:21.731: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589": Phase="Pending", Reason="", readiness=false. Elapsed: 45.417181ms
Sep 17 16:50:23.738: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052509212s
Sep 17 16:50:26.547: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.861094462s
STEP: Saw pod success
Sep 17 16:50:26.547: INFO: Pod "downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589" satisfied condition "success or failure"
Sep 17 16:50:26.554: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589 container client-container:
STEP: delete the pod
Sep 17 16:50:26.744: INFO: Waiting for pod downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589 to disappear
Sep 17 16:50:26.779: INFO: Pod downwardapi-volume-cfa1cf92-e814-4eb5-8353-de60e5cd7589 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:50:26.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6412" for this suite.
• [SLOW TEST:5.189 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
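As the spec name says, a downward API resourceFieldRef for limits.memory reports the node's allocatable memory when the container declares no memory limit. A rough sketch of the pod shape follows; the file path, volume name, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container declares no memory limit, so the downward API file is
	// expected to report the node's allocatable memory instead.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative image
				Command:      []string{"cat", "/etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}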
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:50:26.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 17 16:50:26.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1355'
Sep 17 16:50:28.096: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 17 16:50:28.096: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Sep 17 16:50:28.105: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-xsv5c]
Sep 17 16:50:28.106: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-xsv5c" in namespace "kubectl-1355" to be "running and ready"
Sep 17 16:50:28.109: INFO: Pod "e2e-test-httpd-rc-xsv5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258624ms
Sep 17 16:50:30.116: INFO: Pod "e2e-test-httpd-rc-xsv5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009890127s
Sep 17 16:50:32.123: INFO: Pod "e2e-test-httpd-rc-xsv5c": Phase="Running", Reason="", readiness=true. Elapsed: 4.016718069s
Sep 17 16:50:32.123: INFO: Pod "e2e-test-httpd-rc-xsv5c" satisfied condition "running and ready"
Sep 17 16:50:32.124: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-xsv5c]
Sep 17 16:50:32.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1355'
Sep 17 16:50:33.303: INFO: stderr: ""
Sep 17 16:50:33.303: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message\n[Thu Sep 17 16:50:30.481265 2020] [mpm_event:notice] [pid 1:tid 139859755830120] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Sep 17 16:50:30.481314 2020] [core:notice] [pid 1:tid 139859755830120] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Sep 17 16:50:33.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1355'
Sep 17 16:50:34.429: INFO: stderr: ""
Sep 17 16:50:34.429: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:50:34.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1355" for this suite.
• [SLOW TEST:7.638 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run rc
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
should create an rc from an image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":67,"skipped":1180,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:50:34.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 17 16:50:38.655: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:50:38.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8039" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
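The container spec this spec exercises writes "OK" to the termination-log file and exits successfully; with FallbackToLogsOnError set, the kubelet uses the file when it is non-empty and only falls back to container logs on error exits with an empty file. A minimal sketch follows, with an illustrative image and pod name.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container writes "OK" to the default termination-log path and exits 0,
	// so the reported termination message should be exactly "OK".
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "termination-message-container",
				Image:                    "busybox", // illustrative image
				Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}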
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1287,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:50:43.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 17 16:50:43.130: INFO: Waiting up to 5m0s for pod "pod-355a2de8-1896-4013-bf26-fd1eec567141" in namespace "emptydir-7018" to be "success or failure" Sep 17 16:50:43.164: INFO: Pod "pod-355a2de8-1896-4013-bf26-fd1eec567141": Phase="Pending", Reason="", readiness=false. Elapsed: 33.422562ms Sep 17 16:50:45.511: INFO: Pod "pod-355a2de8-1896-4013-bf26-fd1eec567141": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380861601s Sep 17 16:50:47.517: INFO: Pod "pod-355a2de8-1896-4013-bf26-fd1eec567141": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.387028227s STEP: Saw pod success Sep 17 16:50:47.517: INFO: Pod "pod-355a2de8-1896-4013-bf26-fd1eec567141" satisfied condition "success or failure" Sep 17 16:50:47.540: INFO: Trying to get logs from node jerma-worker2 pod pod-355a2de8-1896-4013-bf26-fd1eec567141 container test-container: STEP: delete the pod Sep 17 16:50:47.888: INFO: Waiting for pod pod-355a2de8-1896-4013-bf26-fd1eec567141 to disappear Sep 17 16:50:47.893: INFO: Pod pod-355a2de8-1896-4013-bf26-fd1eec567141 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:50:47.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7018" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1296,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:50:47.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:51:03.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2475" for this suite. • [SLOW TEST:16.091 seconds] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":71,"skipped":1313,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:51:03.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet 
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:51:03.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:51:08.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5551" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1318,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:51:08.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Sep 17 16:51:15.201: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Sep 17 16:51:17.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958275, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958275, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958275, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958275, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 16:51:20.269: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 16:51:20.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:51:21.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9481" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:13.571 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":73,"skipped":1337,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:51:21.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:51:38.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8066" for this suite.
• [SLOW TEST:17.261 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":74,"skipped":1342,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:51:38.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Sep 17 16:51:43.121: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Sep 17 16:51:59.252: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:51:59.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9888" for this suite.
• [SLOW TEST:20.297 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
[k8s.io] Delete Grace Period
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":75,"skipped":1362,"failed":0}
S
------------------------------
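"Deleting the pod gracefully" corresponds to a delete request carrying an explicit grace period. A minimal client-go sketch follows; the pod name and namespace are hypothetical, and the ctx-taking signatures assume recent client-go (v0.18+).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask for a 30s grace period: the kubelet sends SIGTERM, waits up to the
	// grace period for the container to exit, then force-kills it, after which
	// the API object disappears.
	grace := int64(30)
	err = cs.CoreV1().Pods("default").Delete(context.TODO(), "example-pod",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
	fmt.Println("delete requested with 30s grace period")
}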
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1363,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:52:03.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:52:03.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3454" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":77,"skipped":1365,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:52:03.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Sep 17 16:52:03.668: INFO: Waiting up to 5m0s for pod "downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9" in namespace "downward-api-6631" to be "success or failure" Sep 17 16:52:03.672: INFO: Pod "downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625192ms Sep 17 16:52:05.678: INFO: Pod "downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010088322s Sep 17 16:52:07.685: INFO: Pod "downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016799679s STEP: Saw pod success Sep 17 16:52:07.685: INFO: Pod "downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9" satisfied condition "success or failure" Sep 17 16:52:07.690: INFO: Trying to get logs from node jerma-worker pod downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9 container dapi-container: STEP: delete the pod Sep 17 16:52:07.740: INFO: Waiting for pod downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9 to disappear Sep 17 16:52:07.774: INFO: Pod downward-api-1fd33c72-37fa-4053-ad33-42bfa41ae5f9 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:52:07.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6631" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1366,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:52:07.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 17 16:52:07.856: INFO: Waiting up to 5m0s for pod "pod-8d0bf708-402c-4442-9a4e-a5a33976a398" in namespace "emptydir-7338" to be "success or failure" Sep 17 16:52:07.860: INFO: Pod "pod-8d0bf708-402c-4442-9a4e-a5a33976a398": Phase="Pending", Reason="", readiness=false. Elapsed: 3.204254ms Sep 17 16:52:09.973: INFO: Pod "pod-8d0bf708-402c-4442-9a4e-a5a33976a398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116802031s Sep 17 16:52:11.980: INFO: Pod "pod-8d0bf708-402c-4442-9a4e-a5a33976a398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123038234s STEP: Saw pod success Sep 17 16:52:11.980: INFO: Pod "pod-8d0bf708-402c-4442-9a4e-a5a33976a398" satisfied condition "success or failure" Sep 17 16:52:11.984: INFO: Trying to get logs from node jerma-worker2 pod pod-8d0bf708-402c-4442-9a4e-a5a33976a398 container test-container: STEP: delete the pod Sep 17 16:52:12.043: INFO: Waiting for pod pod-8d0bf708-402c-4442-9a4e-a5a33976a398 to disappear Sep 17 16:52:12.056: INFO: Pod pod-8d0bf708-402c-4442-9a4e-a5a33976a398 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:52:12.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7338" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1368,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:52:12.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-6fbc2c6c-f359-4280-8322-2279d9fcf1f3 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-6fbc2c6c-f359-4280-8322-2279d9fcf1f3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:53:44.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8467" for this suite. • [SLOW TEST:92.724 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1378,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:53:44.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:53:44.930: INFO: Create a RollingUpdate DaemonSet Sep 17 16:53:44.941: INFO: Check that daemon pods launch on every node of the cluster Sep 17 
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 16:53:44.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 16:53:44.930: INFO: Create a RollingUpdate DaemonSet
Sep 17 16:53:44.941: INFO: Check that daemon pods launch on every node of the cluster
Sep 17 16:53:44.961: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:53:44.972: INFO: Number of nodes with available pods: 0
Sep 17 16:53:44.973: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 16:53:45.986: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:53:45.992: INFO: Number of nodes with available pods: 0
Sep 17 16:53:45.992: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 16:53:47.101: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:53:47.107: INFO: Number of nodes with available pods: 0
Sep 17 16:53:47.107: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 16:53:47.980: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:53:47.986: INFO: Number of nodes with available pods: 0
Sep 17 16:53:47.986: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 16:53:48.982: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:53:48.998: INFO: Number of nodes with available pods: 1
Sep 17 16:53:48.998: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 16:53:49.990: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:53:49.996: INFO: Number of nodes with available pods: 2
Sep 17 16:53:49.996: INFO: Number of running nodes: 2, number of available pods: 2
Sep 17 16:53:49.996: INFO: Update the DaemonSet to trigger a rollout
Sep 17 16:53:50.004: INFO: Updating DaemonSet daemon-set
Sep 17 16:53:58.048: INFO: Roll back the DaemonSet before rollout is complete
Sep 17 16:53:58.056: INFO: Updating DaemonSet daemon-set
Sep 17 16:53:58.056: INFO: Make sure DaemonSet rollback is complete
Sep 17 16:53:58.067: INFO: Wrong image for pod: daemon-set-d8866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 17 16:53:58.067: INFO: Pod daemon-set-d8866 is not available
Sep 17 16:53:58.162: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:53:59.169: INFO: Wrong image for pod: daemon-set-d8866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 17 16:53:59.169: INFO: Pod daemon-set-d8866 is not available
Sep 17 16:53:59.174: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:54:00.279: INFO: Wrong image for pod: daemon-set-d8866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 17 16:54:00.279: INFO: Pod daemon-set-d8866 is not available
Sep 17 16:54:00.295: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:54:01.187: INFO: Pod daemon-set-ps9f5 is not available
Sep 17 16:54:01.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 16:54:02.171: INFO: Pod daemon-set-ps9f5 is not available
Sep 17 16:54:02.178: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6957, will wait for the garbage collector to delete the pods
Sep 17 16:54:02.256: INFO: Deleting DaemonSet.extensions daemon-set took: 10.349608ms
Sep 17 16:54:02.959: INFO: Terminating DaemonSet.extensions daemon-set pods took: 702.404872ms
Sep 17 16:54:07.781: INFO: Number of nodes with available pods: 0
Sep 17 16:54:07.781: INFO: Number of running nodes: 0, number of available pods: 0
Sep 17 16:54:07.787: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6957/daemonsets","resourceVersion":"1069228"},"items":null}
Sep 17 16:54:07.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6957/pods","resourceVersion":"1069228"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 16:54:07.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6957" for this suite.
• [SLOW TEST:23.022 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":81,"skipped":1385,"failed":0}
SSSSS
------------------------------
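As the log shows, the "rollback" in this spec is simply a second template update ("Updating DaemonSet daemon-set" appears twice): the image is first set to the unresolvable foo:non-existent, then written back to httpd:2.4.38-alpine before the rollout finishes, and pods that were never replaced must not be restarted. A rough client-go sketch of the revert step follows; the namespace is hypothetical and this is one way to express the revert, not the suite's exact code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	dsClient := cs.AppsV1().DaemonSets("default")
	ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Write the previous image back into the pod template; the RollingUpdate
	// controller then replaces only the pods that were already recreated with
	// the bad image, leaving healthy pods untouched.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("daemonset template reverted")
}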
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1390,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:12.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:54:12.107: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:54:12.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1783" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":83,"skipped":1393,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:12.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Sep 17 16:54:17.548: INFO: Successfully updated pod "annotationupdate6a81ed7b-cab4-4374-86a1-17b930742a68" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:54:19.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-4440" for this suite. • [SLOW TEST:6.731 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:19.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f Sep 17 16:54:19.682: INFO: Pod name my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f: Found 0 pods out of 1 Sep 17 16:54:24.694: INFO: Pod name my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f: Found 1 pods out of 1 Sep 17 16:54:24.695: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f" are running Sep 17 16:54:24.722: INFO: Pod "my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f-sj8sq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 16:54:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 16:54:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 16:54:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 16:54:19 +0000 UTC Reason: Message:}]) Sep 17 16:54:24.723: INFO: Trying to dial the pod Sep 17 16:54:29.746: INFO: Controller my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f: Got expected result from replica 1 [my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f-sj8sq]: "my-hostname-basic-70a7bb64-7ff7-42f0-89f0-02292f2f126f-sj8sq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:54:29.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8635" for this suite. 
• [SLOW TEST:10.171 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":85,"skipped":1450,"failed":0} [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:29.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 17 16:54:34.413: INFO: Successfully updated pod "pod-update-940679a8-7bed-4dc9-9a29-e6b09e5314f9" STEP: verifying the updated pod is in kubernetes Sep 17 16:54:34.426: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:54:34.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7265" for this suite. 
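The "updating the pod" step above is an optimistic-concurrency write: a Get, a mutation, and an Update that can fail with a Conflict if the resourceVersion moved. A common client-go pattern for this is RetryOnConflict; the sketch below assumes v0.17-era signatures, and the label key/value are illustrative, not the test's actual mutation:

    package example

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // relabelPod updates a pod's labels, re-reading and retrying if another
    // writer bumped the resourceVersion in between.
    func relabelPod(cs kubernetes.Interface, ns, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if pod.Labels == nil {
                pod.Labels = map[string]string{}
            }
            pod.Labels["time"] = "updated" // illustrative change only
            _, err = cs.CoreV1().Pods(ns).Update(pod)
            return err // a Conflict here triggers a fresh Get and retry
        })
    }

After the Update succeeds, a follow-up Get verifies the change, which is the "verifying the updated pod is in kubernetes … Pod update OK" pair in the log.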
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1450,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:34.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:54:38.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2136" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1469,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:38.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 16:54:38.687: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:54:42.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4610" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1490,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:42.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Sep 17 16:54:42.890: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:54:48.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8286" for this suite. 
• [SLOW TEST:5.554 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":89,"skipped":1507,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:54:48.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-wsg8 STEP: Creating a pod to test atomic-volume-subpath Sep 17 16:54:48.486: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wsg8" in namespace "subpath-3400" to be "success or failure" Sep 17 16:54:48.505: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.203193ms Sep 17 16:54:50.512: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025469084s Sep 17 16:54:52.519: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 4.032697484s Sep 17 16:54:54.526: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 6.039577571s Sep 17 16:54:56.533: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 8.046812215s Sep 17 16:54:58.541: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 10.054015216s Sep 17 16:55:00.547: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 12.060019707s Sep 17 16:55:02.554: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 14.067038065s Sep 17 16:55:04.560: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 16.073531091s Sep 17 16:55:06.566: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 18.079960607s Sep 17 16:55:08.576: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.089822314s Sep 17 16:55:10.583: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Running", Reason="", readiness=true. Elapsed: 22.09645255s Sep 17 16:55:12.591: INFO: Pod "pod-subpath-test-configmap-wsg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.104035978s STEP: Saw pod success Sep 17 16:55:12.591: INFO: Pod "pod-subpath-test-configmap-wsg8" satisfied condition "success or failure" Sep 17 16:55:12.595: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-wsg8 container test-container-subpath-configmap-wsg8: STEP: delete the pod Sep 17 16:55:12.817: INFO: Waiting for pod pod-subpath-test-configmap-wsg8 to disappear Sep 17 16:55:12.849: INFO: Pod pod-subpath-test-configmap-wsg8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-wsg8 Sep 17 16:55:12.850: INFO: Deleting pod "pod-subpath-test-configmap-wsg8" in namespace "subpath-3400" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:55:12.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3400" for this suite. • [SLOW TEST:24.513 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":90,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:55:12.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Sep 17 16:55:17.516: INFO: Successfully updated pod "annotationupdate5544a1bf-f779-49ff-93ce-d92922367d50" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 
16:55:21.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6448" for this suite. • [SLOW TEST:8.664 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1532,"failed":0} [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:55:21.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2234.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2234.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 17 16:55:32.826: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.830: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.834: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.838: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.896: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.899: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.903: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod 
dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.907: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:32.915: INFO: Lookups using dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local] Sep 17 16:55:37.922: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.927: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.932: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.936: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.950: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.954: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.957: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.961: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:37.967: INFO: Lookups using dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local] Sep 17 16:55:42.922: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.928: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.932: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.937: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.951: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.955: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.959: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.967: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:42.975: INFO: Lookups using dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local] Sep 17 16:55:47.923: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.928: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.933: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.937: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.951: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.955: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.959: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.964: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:47.973: INFO: Lookups using dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local] Sep 17 16:55:52.922: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.927: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.932: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.937: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested 
resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.950: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.954: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.958: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.962: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:52.974: INFO: Lookups using dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local] Sep 17 16:55:57.922: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.927: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.932: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.936: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.951: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.955: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.960: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.965: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local from pod dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2: the server could not find the requested resource (get pods dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2) Sep 17 16:55:57.973: INFO: Lookups using dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2234.svc.cluster.local jessie_udp@dns-test-service-2.dns-2234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2234.svc.cluster.local] Sep 17 16:56:02.964: INFO: DNS probes using dns-2234/dns-test-c8eb2e3b-db9f-4089-8636-5405504e67b2 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:56:03.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2234" for this suite. • [SLOW TEST:42.151 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":92,"skipped":1532,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:56:03.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-89c68c2a-092a-41d6-89ff-aec60ee4ffa1 STEP: Creating a pod to test consume configMaps Sep 17 16:56:03.875: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade" in namespace "projected-3343" to be "success or failure" Sep 17 16:56:03.909: INFO: Pod 
"pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade": Phase="Pending", Reason="", readiness=false. Elapsed: 34.055324ms Sep 17 16:56:06.029: INFO: Pod "pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153808369s Sep 17 16:56:08.036: INFO: Pod "pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160950076s STEP: Saw pod success Sep 17 16:56:08.037: INFO: Pod "pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade" satisfied condition "success or failure" Sep 17 16:56:08.041: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade container projected-configmap-volume-test: STEP: delete the pod Sep 17 16:56:08.152: INFO: Waiting for pod pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade to disappear Sep 17 16:56:08.167: INFO: Pod pod-projected-configmaps-29901343-47b1-4fb3-b3ef-c47cc358eade no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:56:08.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3343" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1537,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:56:08.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Sep 17 16:56:08.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6310 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Sep 17 16:56:13.342: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0917 16:56:13.206281 1837 log.go:172] (0x28e8540) (0x28e8620) Create stream\nI0917 16:56:13.208178 1837 log.go:172] (0x28e8540) (0x28e8620) Stream added, broadcasting: 1\nI0917 16:56:13.215647 1837 log.go:172] (0x28e8540) Reply frame received for 1\nI0917 16:56:13.216203 1837 log.go:172] (0x28e8540) (0x26223f0) Create stream\nI0917 16:56:13.216294 1837 log.go:172] (0x28e8540) (0x26223f0) Stream added, broadcasting: 3\nI0917 16:56:13.218171 1837 log.go:172] (0x28e8540) Reply frame received for 3\nI0917 16:56:13.218529 1837 log.go:172] (0x28e8540) (0x2b48070) Create stream\nI0917 16:56:13.218604 1837 log.go:172] (0x28e8540) (0x2b48070) Stream added, broadcasting: 5\nI0917 16:56:13.219713 1837 log.go:172] (0x28e8540) Reply frame received for 5\nI0917 16:56:13.219967 1837 log.go:172] (0x28e8540) (0x28e9490) Create stream\nI0917 16:56:13.220052 1837 log.go:172] (0x28e8540) (0x28e9490) Stream added, broadcasting: 7\nI0917 16:56:13.221177 1837 log.go:172] (0x28e8540) Reply frame received for 7\nI0917 16:56:13.223042 1837 log.go:172] (0x26223f0) (3) Writing data frame\nI0917 16:56:13.223855 1837 log.go:172] (0x26223f0) (3) Writing data frame\nI0917 16:56:13.224630 1837 log.go:172] (0x28e8540) Data frame received for 5\nI0917 16:56:13.224810 1837 log.go:172] (0x2b48070) (5) Data frame handling\nI0917 16:56:13.225087 1837 log.go:172] (0x2b48070) (5) Data frame sent\nI0917 16:56:13.225362 1837 log.go:172] (0x28e8540) Data frame received for 5\nI0917 16:56:13.225423 1837 log.go:172] (0x2b48070) (5) Data frame handling\nI0917 16:56:13.225509 1837 log.go:172] (0x2b48070) (5) Data frame sent\nI0917 16:56:13.274033 1837 log.go:172] (0x28e8540) Data frame received for 7\nI0917 16:56:13.274252 1837 log.go:172] (0x28e8540) Data frame received for 5\nI0917 16:56:13.274477 1837 log.go:172] (0x2b48070) (5) Data frame handling\nI0917 16:56:13.274602 1837 log.go:172] (0x28e9490) (7) Data frame handling\nI0917 16:56:13.274979 1837 log.go:172] (0x28e8540) Data frame received for 1\nI0917 16:56:13.275154 1837 log.go:172] (0x28e8620) (1) Data frame handling\nI0917 16:56:13.275312 1837 log.go:172] (0x28e8620) (1) Data frame sent\nI0917 16:56:13.276298 1837 log.go:172] (0x28e8540) (0x28e8620) Stream removed, broadcasting: 1\nI0917 16:56:13.277845 1837 log.go:172] (0x28e8540) (0x26223f0) Stream removed, broadcasting: 3\nI0917 16:56:13.278582 1837 log.go:172] (0x28e8540) Go away received\nI0917 16:56:13.283798 1837 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x5:(*spdystream.Stream)(0x2b48070), 0x7:(*spdystream.Stream)(0x28e9490)}\nI0917 16:56:13.284331 1837 log.go:172] (0x28e8540) (0x28e8620) Stream removed, broadcasting: 1\nI0917 16:56:13.284720 1837 log.go:172] (0x28e8540) (0x26223f0) Stream removed, broadcasting: 3\nI0917 16:56:13.284802 1837 log.go:172] (0x28e8540) (0x2b48070) Stream removed, broadcasting: 5\nI0917 16:56:13.285189 1837 log.go:172] (0x28e8540) (0x28e9490) Stream removed, broadcasting: 7\n" Sep 17 16:56:13.343: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:56:15.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-6310" for this suite. • [SLOW TEST:7.189 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843 should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":94,"skipped":1545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:56:15.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:56:28.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:56:33.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:56:35.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958588, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:56:38.370: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:56:38.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3697" for this suite. STEP: Destroying namespace "webhook-3697-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.543 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":95,"skipped":1588,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:56:38.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: 
Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 16:56:44.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 16:56:46.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 17 16:56:48.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958604, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 16:56:51.896: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:56:52.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5625" for this suite. STEP: Destroying namespace "webhook-5625-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.261 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":96,"skipped":1611,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:56:52.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Sep 17 16:56:52.260: INFO: Waiting up to 5m0s for pod "client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897" in namespace "containers-8064" to be "success or failure" Sep 17 16:56:52.310: INFO: Pod "client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897": Phase="Pending", Reason="", readiness=false. Elapsed: 49.554168ms Sep 17 16:56:54.472: INFO: Pod "client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211670285s Sep 17 16:56:56.479: INFO: Pod "client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.218762869s STEP: Saw pod success Sep 17 16:56:56.479: INFO: Pod "client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897" satisfied condition "success or failure" Sep 17 16:56:56.504: INFO: Trying to get logs from node jerma-worker pod client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897 container test-container: STEP: delete the pod Sep 17 16:56:56.544: INFO: Waiting for pod client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897 to disappear Sep 17 16:56:56.573: INFO: Pod client-containers-281c4fdd-1fd0-486f-a793-1c4818f52897 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:56:56.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8064" for this suite. 
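For reference alongside the [k8s.io] Docker Containers spec above: a minimal Go sketch of the kind of pod it creates, where setting Args overrides the image's default CMD while leaving its ENTRYPOINT alone. The pod name, image tag, and argument values here are illustrative stand-ins, not the ones the framework generated.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox:1.29",
                // Args overrides the image's CMD; Command (left unset here)
                // would override its ENTRYPOINT instead.
                Args: []string{"echo", "override", "arguments"},
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}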
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1627,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:56:56.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cbcb12b9-fed1-4dc8-8c1c-09d7ccc6d572 STEP: Creating a pod to test consume secrets Sep 17 16:56:56.753: INFO: Waiting up to 5m0s for pod "pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515" in namespace "secrets-3609" to be "success or failure" Sep 17 16:56:56.767: INFO: Pod "pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515": Phase="Pending", Reason="", readiness=false. Elapsed: 13.432568ms Sep 17 16:56:58.773: INFO: Pod "pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019777363s Sep 17 16:57:00.779: INFO: Pod "pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026010341s STEP: Saw pod success Sep 17 16:57:00.779: INFO: Pod "pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515" satisfied condition "success or failure" Sep 17 16:57:00.784: INFO: Trying to get logs from node jerma-worker pod pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515 container secret-volume-test: STEP: delete the pod Sep 17 16:57:00.918: INFO: Waiting for pod pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515 to disappear Sep 17 16:57:00.962: INFO: Pod pod-secrets-53165f7e-2f86-4cce-9aca-870df0ff3515 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:57:00.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3609" for this suite. STEP: Destroying namespace "secret-namespace-3914" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1630,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:57:00.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-52b69ce7-5fed-4d92-98ab-48ecd1ce4d05 STEP: Creating a pod to test consume secrets Sep 17 16:57:01.057: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621" in namespace "projected-9008" to be "success or failure" Sep 17 16:57:01.089: INFO: Pod "pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621": Phase="Pending", Reason="", readiness=false. Elapsed: 31.535326ms Sep 17 16:57:03.095: INFO: Pod "pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038054977s Sep 17 16:57:05.102: INFO: Pod "pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04471027s STEP: Saw pod success Sep 17 16:57:05.102: INFO: Pod "pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621" satisfied condition "success or failure" Sep 17 16:57:05.107: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621 container projected-secret-volume-test: STEP: delete the pod Sep 17 16:57:05.126: INFO: Waiting for pod pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621 to disappear Sep 17 16:57:05.151: INFO: Pod pod-projected-secrets-7b47238c-18af-4f79-9463-22b502e9a621 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:57:05.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9008" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1637,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:57:05.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 17 16:57:09.812: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7e7fbefb-ebb3-4a97-afd5-a20bf0567603" Sep 17 16:57:09.813: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7e7fbefb-ebb3-4a97-afd5-a20bf0567603" in namespace "pods-9436" to be "terminated due to deadline exceeded" Sep 17 16:57:09.827: INFO: Pod "pod-update-activedeadlineseconds-7e7fbefb-ebb3-4a97-afd5-a20bf0567603": Phase="Running", Reason="", readiness=true. Elapsed: 14.022751ms Sep 17 16:57:11.833: INFO: Pod "pod-update-activedeadlineseconds-7e7fbefb-ebb3-4a97-afd5-a20bf0567603": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020591144s Sep 17 16:57:11.834: INFO: Pod "pod-update-activedeadlineseconds-7e7fbefb-ebb3-4a97-afd5-a20bf0567603" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:57:11.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9436" for this suite. 
• [SLOW TEST:6.682 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1647,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:57:11.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0917 16:57:21.974777 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 17 16:57:21.975: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:57:21.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9707" for this suite. 
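"Not orphaning" in the Garbage collector spec above corresponds to background (or foreground) deletion propagation on the ReplicationController: the GC then removes the pods the RC created, which is what the spec waits for. A sketch under the same client-go assumption as earlier; the RC name is illustrative:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Background propagation = not orphaning: dependents are garbage-collected
    // after the owner is gone. DeletePropagationOrphan would leave the pods.
    policy := metav1.DeletePropagationBackground
    err = cs.CoreV1().ReplicationControllers("gc-9707").Delete(context.TODO(),
        "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
    if err != nil {
        panic(err)
    }
}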
• [SLOW TEST:10.134 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":101,"skipped":1669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:57:21.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 17 16:57:22.040: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 17 16:57:48.250: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.22 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 17 16:57:48.250: INFO: >>> kubeConfig: /root/.kube/config I0917 16:57:48.364423 7 log.go:172] (0x8fc7030) (0x8fc73b0) Create stream I0917 16:57:48.364581 7 log.go:172] (0x8fc7030) (0x8fc73b0) Stream added, broadcasting: 1 I0917 16:57:48.370390 7 log.go:172] (0x8fc7030) Reply frame received for 1 I0917 16:57:48.370685 7 log.go:172] (0x8fc7030) (0xa3d2380) Create stream I0917 16:57:48.370837 7 log.go:172] (0x8fc7030) (0xa3d2380) Stream added, broadcasting: 3 I0917 16:57:48.372889 7 log.go:172] (0x8fc7030) Reply frame received for 3 I0917 16:57:48.373031 7 log.go:172] (0x8fc7030) (0x9208310) Create stream I0917 16:57:48.373099 7 log.go:172] (0x8fc7030) (0x9208310) Stream added, broadcasting: 5 I0917 16:57:48.374469 7 log.go:172] (0x8fc7030) Reply frame received for 5 I0917 16:57:49.461448 7 log.go:172] (0x8fc7030) Data frame received for 3 I0917 16:57:49.461757 7 log.go:172] (0xa3d2380) (3) Data frame handling I0917 16:57:49.462132 7 log.go:172] (0xa3d2380) (3) Data frame sent I0917 16:57:49.462313 7 log.go:172] (0x8fc7030) Data frame received for 3 I0917 16:57:49.462494 7 log.go:172] (0x8fc7030) Data frame received for 5 I0917 16:57:49.462743 7 log.go:172] (0x9208310) (5) Data frame handling I0917 16:57:49.462932 7 log.go:172] (0xa3d2380) (3) Data frame handling I0917 16:57:49.464431 7 log.go:172] (0x8fc7030) Data frame received 
for 1 I0917 16:57:49.464624 7 log.go:172] (0x8fc73b0) (1) Data frame handling I0917 16:57:49.464801 7 log.go:172] (0x8fc73b0) (1) Data frame sent I0917 16:57:49.464961 7 log.go:172] (0x8fc7030) (0x8fc73b0) Stream removed, broadcasting: 1 I0917 16:57:49.465149 7 log.go:172] (0x8fc7030) Go away received I0917 16:57:49.465685 7 log.go:172] (0x8fc7030) (0x8fc73b0) Stream removed, broadcasting: 1 I0917 16:57:49.465841 7 log.go:172] (0x8fc7030) (0xa3d2380) Stream removed, broadcasting: 3 I0917 16:57:49.465992 7 log.go:172] (0x8fc7030) (0x9208310) Stream removed, broadcasting: 5 Sep 17 16:57:49.466: INFO: Found all expected endpoints: [netserver-0] Sep 17 16:57:49.473: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.231 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 17 16:57:49.473: INFO: >>> kubeConfig: /root/.kube/config I0917 16:57:49.578195 7 log.go:172] (0x8c3ff80) (0x780c930) Create stream I0917 16:57:49.578327 7 log.go:172] (0x8c3ff80) (0x780c930) Stream added, broadcasting: 1 I0917 16:57:49.582881 7 log.go:172] (0x8c3ff80) Reply frame received for 1 I0917 16:57:49.583170 7 log.go:172] (0x8c3ff80) (0x7f19ab0) Create stream I0917 16:57:49.583296 7 log.go:172] (0x8c3ff80) (0x7f19ab0) Stream added, broadcasting: 3 I0917 16:57:49.585135 7 log.go:172] (0x8c3ff80) Reply frame received for 3 I0917 16:57:49.585259 7 log.go:172] (0x8c3ff80) (0x6f50620) Create stream I0917 16:57:49.585325 7 log.go:172] (0x8c3ff80) (0x6f50620) Stream added, broadcasting: 5 I0917 16:57:49.586856 7 log.go:172] (0x8c3ff80) Reply frame received for 5 I0917 16:57:50.664988 7 log.go:172] (0x8c3ff80) Data frame received for 3 I0917 16:57:50.665314 7 log.go:172] (0x7f19ab0) (3) Data frame handling I0917 16:57:50.665598 7 log.go:172] (0x8c3ff80) Data frame received for 5 I0917 16:57:50.665853 7 log.go:172] (0x6f50620) (5) Data frame handling I0917 16:57:50.666015 7 log.go:172] (0x7f19ab0) (3) Data frame sent I0917 16:57:50.666160 7 log.go:172] (0x8c3ff80) Data frame received for 3 I0917 16:57:50.666277 7 log.go:172] (0x7f19ab0) (3) Data frame handling I0917 16:57:50.667734 7 log.go:172] (0x8c3ff80) Data frame received for 1 I0917 16:57:50.667873 7 log.go:172] (0x780c930) (1) Data frame handling I0917 16:57:50.668029 7 log.go:172] (0x780c930) (1) Data frame sent I0917 16:57:50.668272 7 log.go:172] (0x8c3ff80) (0x780c930) Stream removed, broadcasting: 1 I0917 16:57:50.668474 7 log.go:172] (0x8c3ff80) Go away received I0917 16:57:50.668861 7 log.go:172] (0x8c3ff80) (0x780c930) Stream removed, broadcasting: 1 I0917 16:57:50.669017 7 log.go:172] (0x8c3ff80) (0x7f19ab0) Stream removed, broadcasting: 3 I0917 16:57:50.669163 7 log.go:172] (0x8c3ff80) (0x6f50620) Stream removed, broadcasting: 5 Sep 17 16:57:50.669: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 16:57:50.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8" for this suite. 
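The nc invocations in the networking spec above send the literal string hostName over UDP and expect the netserver pod to answer with its hostname. The same probe in plain Go stdlib; the address is the pod IP and port from the log, but otherwise this is an illustrative stand-in for what the suite drives through ExecWithOptions:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Dial the netserver pod's UDP endpoint directly from the node/host side.
    conn, err := net.DialTimeout("udp", "10.244.1.22:8081", time.Second)
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    conn.SetDeadline(time.Now().Add(time.Second)) // mirror nc's -w 1
    if _, err := conn.Write([]byte("hostName")); err != nil {
        panic(err)
    }
    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        panic(err)
    }
    fmt.Printf("endpoint replied: %s\n", buf[:n])
}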
• [SLOW TEST:28.697 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1694,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 16:57:50.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-2511eb5d-218a-40a2-9bd4-f007a6c4e8ea in namespace container-probe-3528 Sep 17 16:57:54.809: INFO: Started pod busybox-2511eb5d-218a-40a2-9bd4-f007a6c4e8ea in namespace container-probe-3528 STEP: checking the pod's current state and verifying that restartCount is present Sep 17 16:57:54.815: INFO: Initial restart count of pod busybox-2511eb5d-218a-40a2-9bd4-f007a6c4e8ea is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:01:55.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3528" for this suite. 
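The container in the probe spec above is never restarted because the probed file keeps existing for the whole observation window. A sketch of such a container, using the v1.17-era k8s.io/api field names (later releases renamed the embedded Handler to ProbeHandler); image and timings are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "busybox",
        Image: "busybox:1.29",
        // Create the health file once and keep the container alive.
        Args: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
        LivenessProbe: &corev1.Probe{
            Handler: corev1.Handler{
                // The probe succeeds as long as the file is readable.
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
            },
            InitialDelaySeconds: 15,
            TimeoutSeconds:      1,
        },
    }
    fmt.Printf("%+v\n", c)
}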
• [SLOW TEST:245.306 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:01:55.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-1a8e1670-b406-49f9-8583-85b5c8111c4e STEP: Creating secret with name s-test-opt-upd-ae050180-2146-4d53-ae5e-4baa7a6e8524 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1a8e1670-b406-49f9-8583-85b5c8111c4e STEP: Updating secret s-test-opt-upd-ae050180-2146-4d53-ae5e-4baa7a6e8524 STEP: Creating secret with name s-test-opt-create-03b510b5-b296-422b-88af-40e081adc19e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:02:04.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6139" for this suite. 
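Optional secret sources are what let the pod in the spec above start before the s-test-opt-create secret existed and then observe its file appear: the kubelet keeps syncing the projected volume as secrets are created, updated, and deleted. A sketch of one such source, name illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true
    // With Optional set, the volume mounts even while the secret is absent,
    // and its contents track the secret's lifecycle afterwards.
    src := corev1.VolumeProjection{
        Secret: &corev1.SecretProjection{
            LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
            Optional:             &optional,
        },
    }
    fmt.Printf("%+v\n", src)
}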
• [SLOW TEST:8.297 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1745,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:02:04.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Sep 17 17:02:04.361: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 17 17:02:04.378: INFO: Waiting for terminating namespaces to be deleted... 
Sep 17 17:02:04.398: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Sep 17 17:02:04.425: INFO: kindnet-m6c7w from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded) Sep 17 17:02:04.425: INFO: Container kindnet-cni ready: true, restart count 0 Sep 17 17:02:04.425: INFO: kube-proxy-4jmbs from kube-system started at 2020-09-13 16:54:28 +0000 UTC (1 container statuses recorded) Sep 17 17:02:04.425: INFO: Container kube-proxy ready: true, restart count 0 Sep 17 17:02:04.425: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Sep 17 17:02:04.439: INFO: kindnet-4ckzg from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded) Sep 17 17:02:04.439: INFO: Container kindnet-cni ready: true, restart count 0 Sep 17 17:02:04.439: INFO: pod-projected-secrets-4fbf61a2-4f24-44d8-a117-0ea20276e03d from projected-6139 started at 2020-09-17 17:01:56 +0000 UTC (3 container statuses recorded) Sep 17 17:02:04.439: INFO: Container creates-volume-test ready: true, restart count 0 Sep 17 17:02:04.439: INFO: Container dels-volume-test ready: true, restart count 0 Sep 17 17:02:04.439: INFO: Container upds-volume-test ready: true, restart count 0 Sep 17 17:02:04.439: INFO: kube-proxy-2w9xp from kube-system started at 2020-09-13 16:54:31 +0000 UTC (1 container statuses recorded) Sep 17 17:02:04.439: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a28711b1-38cd-40b7-b3db-8dc1ddd9b2be 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a28711b1-38cd-40b7-b3db-8dc1ddd9b2be off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a28711b1-38cd-40b7-b3db-8dc1ddd9b2be [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:02:20.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2417" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.473 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":105,"skipped":1747,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:02:20.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-1f18a433-28b5-4611-8192-135a74d3d581 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:02:20.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2300" for this suite. 
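On the [sig-scheduling] SchedulerPredicates spec above: the scheduler treats hostPorts as conflicting only when hostIP, protocol, and port all collide, so its three pods can share one node. The port triples from that spec, reduced to a sketch:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Same hostPort 54321 three times over, yet no scheduling conflict:
    // each triple differs in hostIP or protocol.
    ports := []corev1.ContainerPort{
        {HostPort: 54321, ContainerPort: 80, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}, // pod1
        {HostPort: 54321, ContainerPort: 80, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP}, // pod2
        {HostPort: 54321, ContainerPort: 80, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}, // pod3
    }
    for _, p := range ports {
        fmt.Printf("%s %s:%d -> container %d\n", p.Protocol, p.HostIP, p.HostPort, p.ContainerPort)
    }
}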
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":106,"skipped":1750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:02:20.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Sep 17 17:02:21.196: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3429" to be "success or failure" Sep 17 17:02:21.239: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 43.284606ms Sep 17 17:02:23.264: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068262048s Sep 17 17:02:25.274: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077664074s Sep 17 17:02:27.288: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092227038s STEP: Saw pod success Sep 17 17:02:27.288: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Sep 17 17:02:27.466: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Sep 17 17:02:27.646: INFO: Waiting for pod pod-host-path-test to disappear Sep 17 17:02:27.675: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:02:27.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3429" for this suite. 
• [SLOW TEST:6.692 seconds] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1783,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:02:27.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Sep 17 17:02:28.182: INFO: Waiting up to 5m0s for pod "downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f" in namespace "downward-api-8370" to be "success or failure" Sep 17 17:02:28.310: INFO: Pod "downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f": Phase="Pending", Reason="", readiness=false. Elapsed: 128.166385ms Sep 17 17:02:30.328: INFO: Pod "downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145996329s Sep 17 17:02:32.335: INFO: Pod "downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f": Phase="Running", Reason="", readiness=true. Elapsed: 4.153405159s Sep 17 17:02:34.342: INFO: Pod "downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160353636s STEP: Saw pod success Sep 17 17:02:34.343: INFO: Pod "downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f" satisfied condition "success or failure" Sep 17 17:02:34.348: INFO: Trying to get logs from node jerma-worker2 pod downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f container dapi-container: STEP: delete the pod Sep 17 17:02:34.372: INFO: Waiting for pod downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f to disappear Sep 17 17:02:34.376: INFO: Pod downward-api-7b4f6632-6fb8-47e1-bfb2-bab69c9da79f no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:02:34.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8370" for this suite. 
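Each env var in the Downward API spec above mirrors one resource field of the container itself via resourceFieldRef. A sketch of that wiring, with illustrative env-var and container names:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // One env var per resource field of the named container.
    mk := func(name, resource string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    ContainerName: "dapi-container",
                    Resource:      resource,
                },
            },
        }
    }
    env := []corev1.EnvVar{
        mk("CPU_LIMIT", "limits.cpu"),
        mk("MEMORY_LIMIT", "limits.memory"),
        mk("CPU_REQUEST", "requests.cpu"),
        mk("MEMORY_REQUEST", "requests.memory"),
    }
    fmt.Printf("%+v\n", env)
}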
• [SLOW TEST:6.714 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1802,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:02:34.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 17:02:48.199: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 17:02:50.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958968, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958968, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958968, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735958968, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 17:02:53.403: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:03:03.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8312" for this suite. STEP: Destroying namespace "webhook-8312-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:29.288 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":109,"skipped":1806,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:03:03.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-94b6ed2c-a7fa-4e8c-bd75-527cf597efc3 STEP: Creating secret with name s-test-opt-upd-31ca3c55-1a3b-4980-8fd9-339ef18bf0d7 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-94b6ed2c-a7fa-4e8c-bd75-527cf597efc3 STEP: Updating secret s-test-opt-upd-31ca3c55-1a3b-4980-8fd9-339ef18bf0d7 STEP: Creating secret with name s-test-opt-create-7aa38f68-a1cd-4d1c-a22c-f5bbe01dc5b2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:04:42.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9305" for this suite.
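A few specs back, the AdmissionWebhook "deny pod and configmap creation" spec registered validating webhooks against pods and configmaps and expected non-compliant objects to be rejected. A hedged sketch of what such a registration object looks like; the webhook name, service reference, and namespaces are illustrative, and a real configuration additionally needs a CA bundle and a serving endpoint behind the service:

package main

import (
    "fmt"

    admissionv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
    failPolicy := admissionv1.Fail
    sideEffects := admissionv1.SideEffectClassNone
    // One webhook watching CREATE/UPDATE of pods and configmaps.
    hook := admissionv1.ValidatingWebhook{
        Name: "deny-unwanted-pod-and-configmap.example.com",
        Rules: []admissionv1.RuleWithOperations{{
            Operations: []admissionv1.OperationType{admissionv1.Create, admissionv1.Update},
            Rule: admissionv1.Rule{
                APIGroups:   []string{""},
                APIVersions: []string{"v1"},
                Resources:   []string{"pods", "configmaps"},
            },
        }},
        ClientConfig: admissionv1.WebhookClientConfig{
            Service: &admissionv1.ServiceReference{
                Namespace: "webhook-markers",
                Name:      "e2e-test-webhook",
            },
        },
        FailurePolicy:           &failPolicy,
        SideEffects:             &sideEffects,
        AdmissionReviewVersions: []string{"v1", "v1beta1"},
    }
    fmt.Println(hook.Name)
}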
• [SLOW TEST:99.026 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:04:42.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check is all data is printed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:04:42.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Sep 17 17:04:43.892: INFO: stderr: "" Sep 17 17:04:43.893: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:04:43.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-454" for this suite. 
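The kubectl version spec above checks that both halves of the output, a client and a server version.Info, are printed. The server half is also reachable programmatically through the discovery client; a sketch under the same client-go assumption as earlier, using the kubeconfig path from this run:

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Same version.Info `kubectl version` prints for the server side:
    // GitVersion, GitCommit, GoVersion, Platform, and so on.
    v, err := cs.Discovery().ServerVersion()
    if err != nil {
        panic(err)
    }
    fmt.Printf("server %s (%s, %s)\n", v.GitVersion, v.GoVersion, v.Platform)
}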
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":111,"skipped":1865,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:04:43.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9697 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9697 STEP: Creating statefulset with conflicting port in namespace statefulset-9697 STEP: Waiting until pod test-pod will start running in namespace statefulset-9697 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9697 Sep 17 17:04:48.098: INFO: Observed stateful pod in namespace: statefulset-9697, name: ss-0, uid: 3f53d144-125d-4895-af05-498120ac0150, status phase: Pending. Waiting for statefulset controller to delete. Sep 17 17:04:48.479: INFO: Observed stateful pod in namespace: statefulset-9697, name: ss-0, uid: 3f53d144-125d-4895-af05-498120ac0150, status phase: Failed. Waiting for statefulset controller to delete. Sep 17 17:04:48.674: INFO: Observed stateful pod in namespace: statefulset-9697, name: ss-0, uid: 3f53d144-125d-4895-af05-498120ac0150, status phase: Failed. Waiting for statefulset controller to delete. 
Sep 17 17:04:48.703: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9697 STEP: Removing pod with conflicting port in namespace statefulset-9697 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9697 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Sep 17 17:04:53.395: INFO: Deleting all statefulset in ns statefulset-9697 Sep 17 17:04:53.422: INFO: Scaling statefulset ss to 0 Sep 17 17:05:13.457: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 17:05:13.462: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:05:13.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9697" for this suite. • [SLOW TEST:29.567 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":112,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:05:13.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Sep 17 17:05:13.578: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Sep 17 17:06:17.124: INFO: >>> kubeConfig: /root/.kube/config Sep 17 17:06:26.659: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:07:30.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2935" for this suite. • [SLOW TEST:136.715 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":113,"skipped":1887,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:07:30.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:07:30.315: INFO: Waiting up to 5m0s for pod "busybox-user-65534-2ca4d4f6-29b4-4abe-882c-69cd8bfb7dbf" in namespace "security-context-test-8640" to be "success or failure" Sep 17 17:07:30.325: INFO: Pod "busybox-user-65534-2ca4d4f6-29b4-4abe-882c-69cd8bfb7dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.72561ms Sep 17 17:07:32.331: INFO: Pod "busybox-user-65534-2ca4d4f6-29b4-4abe-882c-69cd8bfb7dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01597003s Sep 17 17:07:34.338: INFO: Pod "busybox-user-65534-2ca4d4f6-29b4-4abe-882c-69cd8bfb7dbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022483272s Sep 17 17:07:34.338: INFO: Pod "busybox-user-65534-2ca4d4f6-29b4-4abe-882c-69cd8bfb7dbf" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:07:34.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8640" for this suite. 
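The Security Context spec above needs only one field: a container-level runAsUser of 65534 (conventionally the nobody user), which the test then verifies by having the container report its own uid. A sketch with an illustrative image and command:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid := int64(65534) // conventionally "nobody"
    c := corev1.Container{
        Name:  "busybox-user-65534",
        Image: "busybox:1.29",
        // The container prints the uid it actually runs as.
        Args: []string{"sh", "-c", "id -u"},
        SecurityContext: &corev1.SecurityContext{
            RunAsUser: &uid,
        },
    }
    fmt.Printf("%+v\n", c)
}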
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1890,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:07:34.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-646 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 17 17:07:34.430: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 17 17:07:58.595: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.30:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-646 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 17 17:07:58.596: INFO: >>> kubeConfig: /root/.kube/config I0917 17:07:58.700030 7 log.go:172] (0xa02c1c0) (0xa02c230) Create stream I0917 17:07:58.700220 7 log.go:172] (0xa02c1c0) (0xa02c230) Stream added, broadcasting: 1 I0917 17:07:58.703838 7 log.go:172] (0xa02c1c0) Reply frame received for 1 I0917 17:07:58.704093 7 log.go:172] (0xa02c1c0) (0x9aeb730) Create stream I0917 17:07:58.704286 7 log.go:172] (0xa02c1c0) (0x9aeb730) Stream added, broadcasting: 3 I0917 17:07:58.705956 7 log.go:172] (0xa02c1c0) Reply frame received for 3 I0917 17:07:58.706104 7 log.go:172] (0xa02c1c0) (0x9578150) Create stream I0917 17:07:58.706182 7 log.go:172] (0xa02c1c0) (0x9578150) Stream added, broadcasting: 5 I0917 17:07:58.707729 7 log.go:172] (0xa02c1c0) Reply frame received for 5 I0917 17:07:58.773594 7 log.go:172] (0xa02c1c0) Data frame received for 5 I0917 17:07:58.773742 7 log.go:172] (0x9578150) (5) Data frame handling I0917 17:07:58.773929 7 log.go:172] (0xa02c1c0) Data frame received for 3 I0917 17:07:58.774080 7 log.go:172] (0x9aeb730) (3) Data frame handling I0917 17:07:58.774227 7 log.go:172] (0x9aeb730) (3) Data frame sent I0917 17:07:58.774331 7 log.go:172] (0xa02c1c0) Data frame received for 3 I0917 17:07:58.774423 7 log.go:172] (0x9aeb730) (3) Data frame handling I0917 17:07:58.774803 7 log.go:172] (0xa02c1c0) Data frame received for 1 I0917 17:07:58.774914 7 log.go:172] (0xa02c230) (1) Data frame handling I0917 17:07:58.775039 7 log.go:172] (0xa02c230) (1) Data frame sent I0917 17:07:58.775157 7 log.go:172] (0xa02c1c0) (0xa02c230) Stream removed, broadcasting: 1 I0917 17:07:58.775291 7 log.go:172] (0xa02c1c0) Go away received I0917 17:07:58.775968 7 log.go:172] (0xa02c1c0) (0xa02c230) Stream removed, broadcasting: 1 
I0917 17:07:58.776157 7 log.go:172] (0xa02c1c0) (0x9aeb730) Stream removed, broadcasting: 3 I0917 17:07:58.776292 7 log.go:172] (0xa02c1c0) (0x9578150) Stream removed, broadcasting: 5 Sep 17 17:07:58.776: INFO: Found all expected endpoints: [netserver-0] Sep 17 17:07:58.781: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.239:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-646 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 17 17:07:58.781: INFO: >>> kubeConfig: /root/.kube/config I0917 17:07:58.889632 7 log.go:172] (0xa02c770) (0xa02c7e0) Create stream I0917 17:07:58.889766 7 log.go:172] (0xa02c770) (0xa02c7e0) Stream added, broadcasting: 1 I0917 17:07:58.892986 7 log.go:172] (0xa02c770) Reply frame received for 1 I0917 17:07:58.893176 7 log.go:172] (0xa02c770) (0xa0761c0) Create stream I0917 17:07:58.893251 7 log.go:172] (0xa02c770) (0xa0761c0) Stream added, broadcasting: 3 I0917 17:07:58.894489 7 log.go:172] (0xa02c770) Reply frame received for 3 I0917 17:07:58.894620 7 log.go:172] (0xa02c770) (0x9aeb960) Create stream I0917 17:07:58.894706 7 log.go:172] (0xa02c770) (0x9aeb960) Stream added, broadcasting: 5 I0917 17:07:58.896024 7 log.go:172] (0xa02c770) Reply frame received for 5 I0917 17:07:58.967427 7 log.go:172] (0xa02c770) Data frame received for 3 I0917 17:07:58.967640 7 log.go:172] (0xa0761c0) (3) Data frame handling I0917 17:07:58.967761 7 log.go:172] (0xa0761c0) (3) Data frame sent I0917 17:07:58.967920 7 log.go:172] (0xa02c770) Data frame received for 5 I0917 17:07:58.968256 7 log.go:172] (0xa02c770) Data frame received for 3 I0917 17:07:58.968336 7 log.go:172] (0xa0761c0) (3) Data frame handling I0917 17:07:58.968427 7 log.go:172] (0x9aeb960) (5) Data frame handling I0917 17:07:58.969531 7 log.go:172] (0xa02c770) Data frame received for 1 I0917 17:07:58.969612 7 log.go:172] (0xa02c7e0) (1) Data frame handling I0917 17:07:58.969687 7 log.go:172] (0xa02c7e0) (1) Data frame sent I0917 17:07:58.969770 7 log.go:172] (0xa02c770) (0xa02c7e0) Stream removed, broadcasting: 1 I0917 17:07:58.969866 7 log.go:172] (0xa02c770) Go away received I0917 17:07:58.970349 7 log.go:172] (0xa02c770) (0xa02c7e0) Stream removed, broadcasting: 1 I0917 17:07:58.970521 7 log.go:172] (0xa02c770) (0xa0761c0) Stream removed, broadcasting: 3 I0917 17:07:58.970656 7 log.go:172] (0xa02c770) (0x9aeb960) Stream removed, broadcasting: 5 Sep 17 17:07:58.970: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:07:58.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-646" for this suite. 
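The "Found all expected endpoints" lines above come from exec'ing curl inside the host-test container. Outside the framework, the same node-to-pod HTTP check can be sketched with nothing but the Go standard library; the pod IP below is simply the one from this run's log, and in practice it would be discovered via the API.

package main

import (
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
	"time"
)

// probeHostName mirrors the curl invocation in the log: a 1s connect
// timeout, a 15s overall deadline, and a GET against the netserver
// pod's /hostName endpoint.
func probeHostName(podIP string) (string, error) {
	client := &http.Client{
		Timeout: 15 * time.Second, // --max-time 15
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout: 1 * time.Second, // --connect-timeout 1
			}).DialContext,
		},
	}
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	// Pod IP taken from the log above; a real check would list pods first.
	if name, err := probeHostName("10.244.1.30"); err == nil {
		fmt.Println("endpoint answered with hostname:", name)
	}
}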
• [SLOW TEST:24.629 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1897,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:07:58.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1213 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Sep 17 17:07:59.142: INFO: Found 0 stateful pods, waiting for 3 Sep 17 17:08:09.149: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:08:09.149: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:08:09.149: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 17 17:08:19.151: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:08:19.151: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:08:19.151: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:08:19.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1213 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 17:08:23.648: INFO: stderr: "I0917 17:08:23.506779 1880 log.go:172] (0x2b03f10) (0x2b03f80) Create stream\nI0917 17:08:23.508219 1880 log.go:172] (0x2b03f10) (0x2b03f80) Stream added, broadcasting: 1\nI0917 17:08:23.516524 1880 log.go:172] (0x2b03f10) 
Reply frame received for 1\nI0917 17:08:23.516948 1880 log.go:172] (0x2b03f10) (0x2972150) Create stream\nI0917 17:08:23.517014 1880 log.go:172] (0x2b03f10) (0x2972150) Stream added, broadcasting: 3\nI0917 17:08:23.518244 1880 log.go:172] (0x2b03f10) Reply frame received for 3\nI0917 17:08:23.518444 1880 log.go:172] (0x2b03f10) (0x28102a0) Create stream\nI0917 17:08:23.518499 1880 log.go:172] (0x2b03f10) (0x28102a0) Stream added, broadcasting: 5\nI0917 17:08:23.519760 1880 log.go:172] (0x2b03f10) Reply frame received for 5\nI0917 17:08:23.601454 1880 log.go:172] (0x2b03f10) Data frame received for 5\nI0917 17:08:23.601739 1880 log.go:172] (0x28102a0) (5) Data frame handling\nI0917 17:08:23.602190 1880 log.go:172] (0x28102a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 17:08:23.631913 1880 log.go:172] (0x2b03f10) Data frame received for 3\nI0917 17:08:23.632283 1880 log.go:172] (0x2b03f10) Data frame received for 5\nI0917 17:08:23.632504 1880 log.go:172] (0x28102a0) (5) Data frame handling\nI0917 17:08:23.632794 1880 log.go:172] (0x2972150) (3) Data frame handling\nI0917 17:08:23.633065 1880 log.go:172] (0x2972150) (3) Data frame sent\nI0917 17:08:23.633255 1880 log.go:172] (0x2b03f10) Data frame received for 3\nI0917 17:08:23.633429 1880 log.go:172] (0x2972150) (3) Data frame handling\nI0917 17:08:23.633969 1880 log.go:172] (0x2b03f10) Data frame received for 1\nI0917 17:08:23.634165 1880 log.go:172] (0x2b03f80) (1) Data frame handling\nI0917 17:08:23.634379 1880 log.go:172] (0x2b03f80) (1) Data frame sent\nI0917 17:08:23.636385 1880 log.go:172] (0x2b03f10) (0x2b03f80) Stream removed, broadcasting: 1\nI0917 17:08:23.637077 1880 log.go:172] (0x2b03f10) Go away received\nI0917 17:08:23.641270 1880 log.go:172] (0x2b03f10) (0x2b03f80) Stream removed, broadcasting: 1\nI0917 17:08:23.641428 1880 log.go:172] (0x2b03f10) (0x2972150) Stream removed, broadcasting: 3\nI0917 17:08:23.641554 1880 log.go:172] (0x2b03f10) (0x28102a0) Stream removed, broadcasting: 5\n" Sep 17 17:08:23.649: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 17:08:23.649: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 17 17:08:33.695: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 17 17:08:43.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1213 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 17:08:45.119: INFO: stderr: "I0917 17:08:45.002872 1911 log.go:172] (0x2705ce0) (0x2705dc0) Create stream\nI0917 17:08:45.007881 1911 log.go:172] (0x2705ce0) (0x2705dc0) Stream added, broadcasting: 1\nI0917 17:08:45.027705 1911 log.go:172] (0x2705ce0) Reply frame received for 1\nI0917 17:08:45.028425 1911 log.go:172] (0x2705ce0) (0x26080e0) Create stream\nI0917 17:08:45.028517 1911 log.go:172] (0x2705ce0) (0x26080e0) Stream added, broadcasting: 3\nI0917 17:08:45.029975 1911 log.go:172] (0x2705ce0) Reply frame received for 3\nI0917 17:08:45.030292 1911 log.go:172] (0x2705ce0) (0x2b70070) Create stream\nI0917 17:08:45.030372 1911 log.go:172] (0x2705ce0) (0x2b70070) Stream added, broadcasting: 5\nI0917 17:08:45.031631 1911 log.go:172] (0x2705ce0) Reply frame received for 5\nI0917 
17:08:45.099972 1911 log.go:172] (0x2705ce0) Data frame received for 5\nI0917 17:08:45.100609 1911 log.go:172] (0x2705ce0) Data frame received for 1\nI0917 17:08:45.100847 1911 log.go:172] (0x2705dc0) (1) Data frame handling\nI0917 17:08:45.100989 1911 log.go:172] (0x2705ce0) Data frame received for 3\nI0917 17:08:45.101187 1911 log.go:172] (0x26080e0) (3) Data frame handling\nI0917 17:08:45.101425 1911 log.go:172] (0x2b70070) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0917 17:08:45.103484 1911 log.go:172] (0x26080e0) (3) Data frame sent\nI0917 17:08:45.103704 1911 log.go:172] (0x2705dc0) (1) Data frame sent\nI0917 17:08:45.104512 1911 log.go:172] (0x2705ce0) Data frame received for 3\nI0917 17:08:45.104740 1911 log.go:172] (0x26080e0) (3) Data frame handling\nI0917 17:08:45.105053 1911 log.go:172] (0x2b70070) (5) Data frame sent\nI0917 17:08:45.105988 1911 log.go:172] (0x2705ce0) Data frame received for 5\nI0917 17:08:45.106115 1911 log.go:172] (0x2b70070) (5) Data frame handling\nI0917 17:08:45.107257 1911 log.go:172] (0x2705ce0) (0x2705dc0) Stream removed, broadcasting: 1\nI0917 17:08:45.108459 1911 log.go:172] (0x2705ce0) Go away received\nI0917 17:08:45.111275 1911 log.go:172] (0x2705ce0) (0x2705dc0) Stream removed, broadcasting: 1\nI0917 17:08:45.111477 1911 log.go:172] (0x2705ce0) (0x26080e0) Stream removed, broadcasting: 3\nI0917 17:08:45.111650 1911 log.go:172] (0x2705ce0) (0x2b70070) Stream removed, broadcasting: 5\n" Sep 17 17:08:45.120: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 17:08:45.121: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 17:08:55.160: INFO: Waiting for StatefulSet statefulset-1213/ss2 to complete update Sep 17 17:08:55.161: INFO: Waiting for Pod statefulset-1213/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 17 17:08:55.161: INFO: Waiting for Pod statefulset-1213/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 17 17:08:55.161: INFO: Waiting for Pod statefulset-1213/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 17 17:09:05.176: INFO: Waiting for StatefulSet statefulset-1213/ss2 to complete update Sep 17 17:09:05.177: INFO: Waiting for Pod statefulset-1213/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 17 17:09:15.174: INFO: Waiting for StatefulSet statefulset-1213/ss2 to complete update Sep 17 17:09:15.175: INFO: Waiting for Pod statefulset-1213/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Sep 17 17:09:25.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1213 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 17:09:26.592: INFO: stderr: "I0917 17:09:26.428265 1934 log.go:172] (0x2831dc0) (0x2831e30) Create stream\nI0917 17:09:26.430836 1934 log.go:172] (0x2831dc0) (0x2831e30) Stream added, broadcasting: 1\nI0917 17:09:26.446094 1934 log.go:172] (0x2831dc0) Reply frame received for 1\nI0917 17:09:26.446585 1934 log.go:172] (0x2831dc0) (0x2634150) Create stream\nI0917 17:09:26.446660 1934 log.go:172] (0x2831dc0) (0x2634150) Stream added, broadcasting: 3\nI0917 17:09:26.448236 1934 log.go:172] (0x2831dc0) Reply frame received for 3\nI0917 17:09:26.448642 1934 log.go:172] (0x2831dc0) (0x24b40e0) Create stream\nI0917 17:09:26.448744 
1934 log.go:172] (0x2831dc0) (0x24b40e0) Stream added, broadcasting: 5\nI0917 17:09:26.450189 1934 log.go:172] (0x2831dc0) Reply frame received for 5\nI0917 17:09:26.545538 1934 log.go:172] (0x2831dc0) Data frame received for 5\nI0917 17:09:26.545857 1934 log.go:172] (0x24b40e0) (5) Data frame handling\nI0917 17:09:26.546621 1934 log.go:172] (0x24b40e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 17:09:26.578725 1934 log.go:172] (0x2831dc0) Data frame received for 5\nI0917 17:09:26.578848 1934 log.go:172] (0x24b40e0) (5) Data frame handling\nI0917 17:09:26.579091 1934 log.go:172] (0x2831dc0) Data frame received for 3\nI0917 17:09:26.579333 1934 log.go:172] (0x2634150) (3) Data frame handling\nI0917 17:09:26.579536 1934 log.go:172] (0x2634150) (3) Data frame sent\nI0917 17:09:26.579638 1934 log.go:172] (0x2831dc0) Data frame received for 3\nI0917 17:09:26.579707 1934 log.go:172] (0x2634150) (3) Data frame handling\nI0917 17:09:26.580096 1934 log.go:172] (0x2831dc0) Data frame received for 1\nI0917 17:09:26.580271 1934 log.go:172] (0x2831e30) (1) Data frame handling\nI0917 17:09:26.580364 1934 log.go:172] (0x2831e30) (1) Data frame sent\nI0917 17:09:26.581045 1934 log.go:172] (0x2831dc0) (0x2831e30) Stream removed, broadcasting: 1\nI0917 17:09:26.582902 1934 log.go:172] (0x2831dc0) Go away received\nI0917 17:09:26.585158 1934 log.go:172] (0x2831dc0) (0x2831e30) Stream removed, broadcasting: 1\nI0917 17:09:26.585334 1934 log.go:172] (0x2831dc0) (0x2634150) Stream removed, broadcasting: 3\nI0917 17:09:26.585476 1934 log.go:172] (0x2831dc0) (0x24b40e0) Stream removed, broadcasting: 5\n" Sep 17 17:09:26.593: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 17:09:26.593: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 17:09:36.653: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 17 17:09:46.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1213 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 17:09:48.070: INFO: stderr: "I0917 17:09:47.943034 1958 log.go:172] (0x28f2000) (0x28f2070) Create stream\nI0917 17:09:47.948198 1958 log.go:172] (0x28f2000) (0x28f2070) Stream added, broadcasting: 1\nI0917 17:09:47.964697 1958 log.go:172] (0x28f2000) Reply frame received for 1\nI0917 17:09:47.965148 1958 log.go:172] (0x28f2000) (0x252f500) Create stream\nI0917 17:09:47.965218 1958 log.go:172] (0x28f2000) (0x252f500) Stream added, broadcasting: 3\nI0917 17:09:47.966504 1958 log.go:172] (0x28f2000) Reply frame received for 3\nI0917 17:09:47.966758 1958 log.go:172] (0x28f2000) (0x24a8ee0) Create stream\nI0917 17:09:47.966828 1958 log.go:172] (0x28f2000) (0x24a8ee0) Stream added, broadcasting: 5\nI0917 17:09:47.968066 1958 log.go:172] (0x28f2000) Reply frame received for 5\nI0917 17:09:48.047940 1958 log.go:172] (0x28f2000) Data frame received for 3\nI0917 17:09:48.048362 1958 log.go:172] (0x28f2000) Data frame received for 5\nI0917 17:09:48.048599 1958 log.go:172] (0x24a8ee0) (5) Data frame handling\nI0917 17:09:48.048834 1958 log.go:172] (0x28f2000) Data frame received for 1\nI0917 17:09:48.048996 1958 log.go:172] (0x28f2070) (1) Data frame handling\nI0917 17:09:48.049257 1958 log.go:172] (0x252f500) (3) Data frame handling\nI0917 17:09:48.050419 1958 log.go:172] (0x28f2070) (1) Data frame sent\nI0917 
17:09:48.050748 1958 log.go:172] (0x252f500) (3) Data frame sent\nI0917 17:09:48.050902 1958 log.go:172] (0x28f2000) Data frame received for 3\nI0917 17:09:48.051034 1958 log.go:172] (0x252f500) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0917 17:09:48.051292 1958 log.go:172] (0x24a8ee0) (5) Data frame sent\nI0917 17:09:48.051498 1958 log.go:172] (0x28f2000) Data frame received for 5\nI0917 17:09:48.052908 1958 log.go:172] (0x28f2000) (0x28f2070) Stream removed, broadcasting: 1\nI0917 17:09:48.057994 1958 log.go:172] (0x24a8ee0) (5) Data frame handling\nI0917 17:09:48.058325 1958 log.go:172] (0x28f2000) Go away received\nI0917 17:09:48.060348 1958 log.go:172] (0x28f2000) (0x28f2070) Stream removed, broadcasting: 1\nI0917 17:09:48.060719 1958 log.go:172] (0x28f2000) (0x252f500) Stream removed, broadcasting: 3\nI0917 17:09:48.061186 1958 log.go:172] (0x28f2000) (0x24a8ee0) Stream removed, broadcasting: 5\n" Sep 17 17:09:48.071: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 17:09:48.071: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 17:09:58.107: INFO: Waiting for StatefulSet statefulset-1213/ss2 to complete update Sep 17 17:09:58.108: INFO: Waiting for Pod statefulset-1213/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 17 17:09:58.108: INFO: Waiting for Pod statefulset-1213/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 17 17:10:08.156: INFO: Waiting for StatefulSet statefulset-1213/ss2 to complete update Sep 17 17:10:08.156: INFO: Waiting for Pod statefulset-1213/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Sep 17 17:10:18.124: INFO: Deleting all statefulset in ns statefulset-1213 Sep 17 17:10:18.129: INFO: Scaling statefulset ss2 to 0 Sep 17 17:10:38.170: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 17:10:38.175: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:10:38.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1213" for this suite. 
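The rolling-update half of the StatefulSet test above amounts to: edit the pod template, then poll the status revisions until every pod reports the update revision. A rough client-go sketch of that loop, assuming the v0.17.x client whose Get/Update methods do not yet take a context argument (namespace, name and image are taken from the log; error handling is abbreviated):

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client the same way the framework does, from a kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns, name := "statefulset-1213", "ss2" // values from the log above
	sets := client.AppsV1().StatefulSets(ns)

	// Trigger a rolling update by editing the pod template, as the test
	// does when it bumps httpd:2.4.38-alpine to httpd:2.4.39-alpine.
	ss, err := sets.Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	if _, err := sets.Update(ss); err != nil {
		panic(err)
	}

	// Poll until every replica is on the update revision; this is the
	// condition behind the "Waiting for StatefulSet ... to complete
	// update" lines in the log. Spec.Replicas is set (to 3) in the test.
	for {
		ss, err = sets.Get(name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if ss.Status.UpdateRevision == ss.Status.CurrentRevision &&
			ss.Status.UpdatedReplicas == *ss.Spec.Replicas {
			fmt.Println("rollout complete at revision", ss.Status.CurrentRevision)
			return
		}
		time.Sleep(10 * time.Second)
	}
}

The rollback step is the same operation in reverse: re-apply the previous template (or use `kubectl rollout undo statefulset/ss2`) and wait for the revisions to converge again, which is what the second half of the log above shows.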
• [SLOW TEST:159.221 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":116,"skipped":1899,"failed":0} SSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:10:38.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:10:38.299: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2729 I0917 17:10:38.358301 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2729, replica count: 1 I0917 17:10:39.410060 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0917 17:10:40.410779 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0917 17:10:41.411764 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 17 17:10:41.553: INFO: Created: latency-svc-gn7f6 Sep 17 17:10:41.596: INFO: Got endpoints: latency-svc-gn7f6 [81.937817ms] Sep 17 17:10:41.635: INFO: Created: latency-svc-cdc8z Sep 17 17:10:41.660: INFO: Got endpoints: latency-svc-cdc8z [62.504562ms] Sep 17 17:10:41.677: INFO: Created: latency-svc-xhq2d Sep 17 17:10:41.732: INFO: Got endpoints: latency-svc-xhq2d [135.354504ms] Sep 17 17:10:41.733: INFO: Created: latency-svc-2jllg Sep 17 17:10:41.750: INFO: Got endpoints: latency-svc-2jllg [151.995929ms] Sep 17 17:10:41.779: INFO: Created: latency-svc-d6q45 Sep 17 17:10:41.797: INFO: Got endpoints: latency-svc-d6q45 [200.025882ms] Sep 17 17:10:41.823: INFO: Created: latency-svc-gt2xp Sep 17 17:10:41.882: INFO: Got endpoints: latency-svc-gt2xp [284.636886ms] Sep 17 17:10:41.883: INFO: Created: latency-svc-vssgt Sep 17 17:10:41.889: INFO: Got endpoints: latency-svc-vssgt [291.059672ms] Sep 17 17:10:41.917: INFO: Created: 
latency-svc-dqcbf Sep 17 17:10:41.938: INFO: Got endpoints: latency-svc-dqcbf [340.533509ms] Sep 17 17:10:41.959: INFO: Created: latency-svc-8j5s8 Sep 17 17:10:41.975: INFO: Got endpoints: latency-svc-8j5s8 [377.144704ms] Sep 17 17:10:42.031: INFO: Created: latency-svc-j2fdm Sep 17 17:10:42.068: INFO: Created: latency-svc-87c9g Sep 17 17:10:42.068: INFO: Got endpoints: latency-svc-j2fdm [470.236093ms] Sep 17 17:10:42.082: INFO: Got endpoints: latency-svc-87c9g [484.637267ms] Sep 17 17:10:42.110: INFO: Created: latency-svc-5xsbh Sep 17 17:10:42.119: INFO: Got endpoints: latency-svc-5xsbh [521.016206ms] Sep 17 17:10:42.169: INFO: Created: latency-svc-j827d Sep 17 17:10:42.178: INFO: Got endpoints: latency-svc-j827d [581.045677ms] Sep 17 17:10:42.205: INFO: Created: latency-svc-p68gg Sep 17 17:10:42.215: INFO: Got endpoints: latency-svc-p68gg [616.901527ms] Sep 17 17:10:42.242: INFO: Created: latency-svc-qnnv6 Sep 17 17:10:42.363: INFO: Got endpoints: latency-svc-qnnv6 [765.533662ms] Sep 17 17:10:42.364: INFO: Created: latency-svc-ll2mx Sep 17 17:10:42.368: INFO: Got endpoints: latency-svc-ll2mx [767.250167ms] Sep 17 17:10:42.409: INFO: Created: latency-svc-fn9mj Sep 17 17:10:42.423: INFO: Got endpoints: latency-svc-fn9mj [762.828712ms] Sep 17 17:10:42.446: INFO: Created: latency-svc-wh6lf Sep 17 17:10:42.459: INFO: Got endpoints: latency-svc-wh6lf [726.238828ms] Sep 17 17:10:42.528: INFO: Created: latency-svc-cfzm4 Sep 17 17:10:42.533: INFO: Got endpoints: latency-svc-cfzm4 [782.885912ms] Sep 17 17:10:42.596: INFO: Created: latency-svc-77r6x Sep 17 17:10:42.609: INFO: Got endpoints: latency-svc-77r6x [812.042558ms] Sep 17 17:10:42.660: INFO: Created: latency-svc-f4hxw Sep 17 17:10:42.669: INFO: Got endpoints: latency-svc-f4hxw [786.82894ms] Sep 17 17:10:42.693: INFO: Created: latency-svc-nxg9z Sep 17 17:10:42.736: INFO: Got endpoints: latency-svc-nxg9z [846.93526ms] Sep 17 17:10:42.810: INFO: Created: latency-svc-mqlfx Sep 17 17:10:42.815: INFO: Got endpoints: latency-svc-mqlfx [876.452789ms] Sep 17 17:10:42.873: INFO: Created: latency-svc-2pkc7 Sep 17 17:10:42.899: INFO: Got endpoints: latency-svc-2pkc7 [924.103737ms] Sep 17 17:10:42.965: INFO: Created: latency-svc-gg2lj Sep 17 17:10:42.981: INFO: Got endpoints: latency-svc-gg2lj [912.211697ms] Sep 17 17:10:43.010: INFO: Created: latency-svc-jwz2g Sep 17 17:10:43.039: INFO: Got endpoints: latency-svc-jwz2g [956.49894ms] Sep 17 17:10:43.063: INFO: Created: latency-svc-vt2qt Sep 17 17:10:43.109: INFO: Got endpoints: latency-svc-vt2qt [990.162391ms] Sep 17 17:10:43.122: INFO: Created: latency-svc-gflzg Sep 17 17:10:43.154: INFO: Got endpoints: latency-svc-gflzg [975.131703ms] Sep 17 17:10:43.183: INFO: Created: latency-svc-8pblz Sep 17 17:10:43.196: INFO: Got endpoints: latency-svc-8pblz [980.170244ms] Sep 17 17:10:43.277: INFO: Created: latency-svc-mzlb9 Sep 17 17:10:43.282: INFO: Got endpoints: latency-svc-mzlb9 [919.315661ms] Sep 17 17:10:43.309: INFO: Created: latency-svc-fwt2k Sep 17 17:10:43.322: INFO: Got endpoints: latency-svc-fwt2k [953.301335ms] Sep 17 17:10:43.345: INFO: Created: latency-svc-9dcw5 Sep 17 17:10:43.358: INFO: Got endpoints: latency-svc-9dcw5 [75.422972ms] Sep 17 17:10:43.421: INFO: Created: latency-svc-m4mkd Sep 17 17:10:43.423: INFO: Got endpoints: latency-svc-m4mkd [1.000037558s] Sep 17 17:10:43.452: INFO: Created: latency-svc-7m2cf Sep 17 17:10:43.471: INFO: Got endpoints: latency-svc-7m2cf [1.011461896s] Sep 17 17:10:43.489: INFO: Created: latency-svc-44ks2 Sep 17 17:10:43.506: INFO: Got endpoints: 
latency-svc-44ks2 [972.973428ms] Sep 17 17:10:43.593: INFO: Created: latency-svc-8s9w2 Sep 17 17:10:43.598: INFO: Got endpoints: latency-svc-8s9w2 [988.239003ms] Sep 17 17:10:43.639: INFO: Created: latency-svc-xhw8c Sep 17 17:10:43.651: INFO: Got endpoints: latency-svc-xhw8c [982.09652ms] Sep 17 17:10:43.674: INFO: Created: latency-svc-ls82f Sep 17 17:10:43.739: INFO: Created: latency-svc-hkmwz Sep 17 17:10:43.740: INFO: Got endpoints: latency-svc-ls82f [1.002918509s] Sep 17 17:10:43.743: INFO: Got endpoints: latency-svc-hkmwz [928.357424ms] Sep 17 17:10:43.764: INFO: Created: latency-svc-7mlwl Sep 17 17:10:43.794: INFO: Got endpoints: latency-svc-7mlwl [895.164351ms] Sep 17 17:10:43.830: INFO: Created: latency-svc-pzb7x Sep 17 17:10:43.870: INFO: Got endpoints: latency-svc-pzb7x [889.082375ms] Sep 17 17:10:43.902: INFO: Created: latency-svc-8rclb Sep 17 17:10:43.917: INFO: Got endpoints: latency-svc-8rclb [877.932662ms] Sep 17 17:10:43.944: INFO: Created: latency-svc-kkzlx Sep 17 17:10:43.961: INFO: Got endpoints: latency-svc-kkzlx [851.913615ms] Sep 17 17:10:44.016: INFO: Created: latency-svc-9h7mh Sep 17 17:10:44.033: INFO: Got endpoints: latency-svc-9h7mh [879.564521ms] Sep 17 17:10:44.076: INFO: Created: latency-svc-twvwx Sep 17 17:10:44.087: INFO: Got endpoints: latency-svc-twvwx [890.88574ms] Sep 17 17:10:44.145: INFO: Created: latency-svc-tkzrm Sep 17 17:10:44.167: INFO: Created: latency-svc-n758l Sep 17 17:10:44.167: INFO: Got endpoints: latency-svc-tkzrm [845.069236ms] Sep 17 17:10:44.183: INFO: Got endpoints: latency-svc-n758l [825.215147ms] Sep 17 17:10:44.208: INFO: Created: latency-svc-crzb6 Sep 17 17:10:44.239: INFO: Got endpoints: latency-svc-crzb6 [815.067125ms] Sep 17 17:10:44.302: INFO: Created: latency-svc-t5qpw Sep 17 17:10:44.310: INFO: Got endpoints: latency-svc-t5qpw [839.157252ms] Sep 17 17:10:44.334: INFO: Created: latency-svc-8z5fg Sep 17 17:10:44.358: INFO: Got endpoints: latency-svc-8z5fg [851.286699ms] Sep 17 17:10:44.394: INFO: Created: latency-svc-5cqqj Sep 17 17:10:44.498: INFO: Created: latency-svc-d6hdl Sep 17 17:10:44.499: INFO: Got endpoints: latency-svc-5cqqj [900.655522ms] Sep 17 17:10:44.508: INFO: Got endpoints: latency-svc-d6hdl [856.750903ms] Sep 17 17:10:44.631: INFO: Created: latency-svc-xkx6m Sep 17 17:10:44.871: INFO: Created: latency-svc-wb7z2 Sep 17 17:10:44.872: INFO: Got endpoints: latency-svc-xkx6m [1.131934013s] Sep 17 17:10:44.876: INFO: Got endpoints: latency-svc-wb7z2 [1.132972025s] Sep 17 17:10:44.934: INFO: Created: latency-svc-cw5ft Sep 17 17:10:44.947: INFO: Got endpoints: latency-svc-cw5ft [1.152052878s] Sep 17 17:10:45.013: INFO: Created: latency-svc-8r5s9 Sep 17 17:10:45.020: INFO: Got endpoints: latency-svc-8r5s9 [1.15016009s] Sep 17 17:10:45.072: INFO: Created: latency-svc-5wbtb Sep 17 17:10:45.093: INFO: Got endpoints: latency-svc-5wbtb [1.175476039s] Sep 17 17:10:45.197: INFO: Created: latency-svc-vgxx5 Sep 17 17:10:45.234: INFO: Got endpoints: latency-svc-vgxx5 [1.272758236s] Sep 17 17:10:45.319: INFO: Created: latency-svc-qr7lc Sep 17 17:10:45.321: INFO: Got endpoints: latency-svc-qr7lc [1.287215246s] Sep 17 17:10:45.353: INFO: Created: latency-svc-f7nwt Sep 17 17:10:45.368: INFO: Got endpoints: latency-svc-f7nwt [1.280894683s] Sep 17 17:10:45.390: INFO: Created: latency-svc-lfm6v Sep 17 17:10:45.402: INFO: Got endpoints: latency-svc-lfm6v [1.234480318s] Sep 17 17:10:45.524: INFO: Created: latency-svc-qvgdr Sep 17 17:10:45.551: INFO: Got endpoints: latency-svc-qvgdr [1.367975382s] Sep 17 17:10:45.553: INFO: Created: 
latency-svc-48vxr Sep 17 17:10:45.581: INFO: Got endpoints: latency-svc-48vxr [1.342619189s] Sep 17 17:10:45.617: INFO: Created: latency-svc-4hckr Sep 17 17:10:45.666: INFO: Got endpoints: latency-svc-4hckr [1.356213678s] Sep 17 17:10:45.677: INFO: Created: latency-svc-7hckw Sep 17 17:10:45.691: INFO: Got endpoints: latency-svc-7hckw [1.333167242s] Sep 17 17:10:45.719: INFO: Created: latency-svc-hqlw5 Sep 17 17:10:45.733: INFO: Got endpoints: latency-svc-hqlw5 [1.234327792s] Sep 17 17:10:45.883: INFO: Created: latency-svc-jkms4 Sep 17 17:10:45.889: INFO: Got endpoints: latency-svc-jkms4 [1.380896953s] Sep 17 17:10:45.911: INFO: Created: latency-svc-brdfv Sep 17 17:10:45.926: INFO: Got endpoints: latency-svc-brdfv [1.053517636s] Sep 17 17:10:45.960: INFO: Created: latency-svc-mzgr6 Sep 17 17:10:46.086: INFO: Got endpoints: latency-svc-mzgr6 [1.209715782s] Sep 17 17:10:46.096: INFO: Created: latency-svc-khzn5 Sep 17 17:10:46.106: INFO: Got endpoints: latency-svc-khzn5 [1.15875622s] Sep 17 17:10:46.135: INFO: Created: latency-svc-6ldfj Sep 17 17:10:46.150: INFO: Got endpoints: latency-svc-6ldfj [1.129416227s] Sep 17 17:10:46.218: INFO: Created: latency-svc-5w6rg Sep 17 17:10:46.219: INFO: Got endpoints: latency-svc-5w6rg [1.12615008s] Sep 17 17:10:46.247: INFO: Created: latency-svc-9x7cj Sep 17 17:10:46.262: INFO: Got endpoints: latency-svc-9x7cj [1.027635069s] Sep 17 17:10:46.282: INFO: Created: latency-svc-69skn Sep 17 17:10:46.312: INFO: Got endpoints: latency-svc-69skn [991.307411ms] Sep 17 17:10:46.388: INFO: Created: latency-svc-khsln Sep 17 17:10:46.396: INFO: Got endpoints: latency-svc-khsln [1.028379303s] Sep 17 17:10:46.421: INFO: Created: latency-svc-qrx7j Sep 17 17:10:46.450: INFO: Got endpoints: latency-svc-qrx7j [1.048198099s] Sep 17 17:10:46.536: INFO: Created: latency-svc-ww8bd Sep 17 17:10:46.540: INFO: Got endpoints: latency-svc-ww8bd [988.554258ms] Sep 17 17:10:46.595: INFO: Created: latency-svc-hkrmr Sep 17 17:10:46.617: INFO: Got endpoints: latency-svc-hkrmr [1.035864313s] Sep 17 17:10:46.695: INFO: Created: latency-svc-6w9j5 Sep 17 17:10:46.707: INFO: Got endpoints: latency-svc-6w9j5 [1.039891767s] Sep 17 17:10:46.744: INFO: Created: latency-svc-5khlc Sep 17 17:10:46.768: INFO: Got endpoints: latency-svc-5khlc [1.076947431s] Sep 17 17:10:46.846: INFO: Created: latency-svc-gc7f6 Sep 17 17:10:46.851: INFO: Got endpoints: latency-svc-gc7f6 [1.118017214s] Sep 17 17:10:46.882: INFO: Created: latency-svc-6mph2 Sep 17 17:10:46.900: INFO: Got endpoints: latency-svc-6mph2 [1.01075006s] Sep 17 17:10:46.931: INFO: Created: latency-svc-dzj84 Sep 17 17:10:46.944: INFO: Got endpoints: latency-svc-dzj84 [1.018104428s] Sep 17 17:10:47.033: INFO: Created: latency-svc-k27rh Sep 17 17:10:47.048: INFO: Got endpoints: latency-svc-k27rh [961.038634ms] Sep 17 17:10:47.074: INFO: Created: latency-svc-szdhg Sep 17 17:10:47.092: INFO: Got endpoints: latency-svc-szdhg [985.916452ms] Sep 17 17:10:47.167: INFO: Created: latency-svc-n2j6k Sep 17 17:10:47.196: INFO: Got endpoints: latency-svc-n2j6k [1.045476764s] Sep 17 17:10:47.196: INFO: Created: latency-svc-tjfgg Sep 17 17:10:47.209: INFO: Got endpoints: latency-svc-tjfgg [990.051983ms] Sep 17 17:10:47.301: INFO: Created: latency-svc-pgbjj Sep 17 17:10:47.317: INFO: Got endpoints: latency-svc-pgbjj [1.055054231s] Sep 17 17:10:47.356: INFO: Created: latency-svc-5hq5c Sep 17 17:10:47.383: INFO: Got endpoints: latency-svc-5hq5c [1.070775927s] Sep 17 17:10:47.475: INFO: Created: latency-svc-fl7lw Sep 17 17:10:47.479: INFO: Got endpoints: 
latency-svc-fl7lw [1.082585322s] Sep 17 17:10:47.518: INFO: Created: latency-svc-l6hw4 Sep 17 17:10:47.549: INFO: Got endpoints: latency-svc-l6hw4 [1.098730203s] Sep 17 17:10:47.612: INFO: Created: latency-svc-zm6zl Sep 17 17:10:47.751: INFO: Got endpoints: latency-svc-zm6zl [1.210045315s] Sep 17 17:10:47.782: INFO: Created: latency-svc-km6hz Sep 17 17:10:47.796: INFO: Got endpoints: latency-svc-km6hz [1.178666066s] Sep 17 17:10:47.827: INFO: Created: latency-svc-ztqxn Sep 17 17:10:47.838: INFO: Got endpoints: latency-svc-ztqxn [1.13096275s] Sep 17 17:10:47.930: INFO: Created: latency-svc-rnhld Sep 17 17:10:47.933: INFO: Got endpoints: latency-svc-rnhld [1.164694777s] Sep 17 17:10:48.061: INFO: Created: latency-svc-pkzsd Sep 17 17:10:48.076: INFO: Got endpoints: latency-svc-pkzsd [1.224680147s] Sep 17 17:10:48.112: INFO: Created: latency-svc-48scg Sep 17 17:10:48.126: INFO: Got endpoints: latency-svc-48scg [1.225727291s] Sep 17 17:10:48.154: INFO: Created: latency-svc-6zdbj Sep 17 17:10:48.187: INFO: Got endpoints: latency-svc-6zdbj [1.242382584s] Sep 17 17:10:48.214: INFO: Created: latency-svc-l2wpl Sep 17 17:10:48.222: INFO: Got endpoints: latency-svc-l2wpl [1.174549901s] Sep 17 17:10:48.274: INFO: Created: latency-svc-q4bhg Sep 17 17:10:48.307: INFO: Got endpoints: latency-svc-q4bhg [1.214313373s] Sep 17 17:10:48.334: INFO: Created: latency-svc-d5hdt Sep 17 17:10:48.356: INFO: Got endpoints: latency-svc-d5hdt [1.15983149s] Sep 17 17:10:48.387: INFO: Created: latency-svc-5l422 Sep 17 17:10:48.451: INFO: Got endpoints: latency-svc-5l422 [1.241534161s] Sep 17 17:10:48.495: INFO: Created: latency-svc-4g28g Sep 17 17:10:48.511: INFO: Got endpoints: latency-svc-4g28g [1.193719488s] Sep 17 17:10:48.537: INFO: Created: latency-svc-7j62r Sep 17 17:10:48.551: INFO: Got endpoints: latency-svc-7j62r [1.167092919s] Sep 17 17:10:48.602: INFO: Created: latency-svc-lj4bh Sep 17 17:10:48.607: INFO: Got endpoints: latency-svc-lj4bh [1.127741074s] Sep 17 17:10:48.639: INFO: Created: latency-svc-tq65z Sep 17 17:10:48.663: INFO: Got endpoints: latency-svc-tq65z [1.113776878s] Sep 17 17:10:48.694: INFO: Created: latency-svc-hq67p Sep 17 17:10:48.755: INFO: Got endpoints: latency-svc-hq67p [1.00451983s] Sep 17 17:10:48.790: INFO: Created: latency-svc-nppfr Sep 17 17:10:48.807: INFO: Got endpoints: latency-svc-nppfr [1.010116431s] Sep 17 17:10:48.894: INFO: Created: latency-svc-k7zg7 Sep 17 17:10:48.902: INFO: Got endpoints: latency-svc-k7zg7 [1.064083213s] Sep 17 17:10:48.927: INFO: Created: latency-svc-k5t8q Sep 17 17:10:48.957: INFO: Got endpoints: latency-svc-k5t8q [1.023289842s] Sep 17 17:10:49.043: INFO: Created: latency-svc-nlwbx Sep 17 17:10:49.053: INFO: Got endpoints: latency-svc-nlwbx [976.792005ms] Sep 17 17:10:49.132: INFO: Created: latency-svc-6kjpb Sep 17 17:10:49.205: INFO: Got endpoints: latency-svc-6kjpb [1.077992055s] Sep 17 17:10:49.215: INFO: Created: latency-svc-tsw2g Sep 17 17:10:49.228: INFO: Got endpoints: latency-svc-tsw2g [1.040941383s] Sep 17 17:10:49.275: INFO: Created: latency-svc-r25fl Sep 17 17:10:49.287: INFO: Got endpoints: latency-svc-r25fl [1.064313419s] Sep 17 17:10:49.343: INFO: Created: latency-svc-qqfss Sep 17 17:10:49.345: INFO: Got endpoints: latency-svc-qqfss [1.038378944s] Sep 17 17:10:49.425: INFO: Created: latency-svc-cx79k Sep 17 17:10:49.435: INFO: Got endpoints: latency-svc-cx79k [1.078606072s] Sep 17 17:10:49.479: INFO: Created: latency-svc-vn2cd Sep 17 17:10:49.496: INFO: Got endpoints: latency-svc-vn2cd [1.044493086s] Sep 17 17:10:49.533: INFO: Created: 
latency-svc-gr7bw Sep 17 17:10:49.557: INFO: Got endpoints: latency-svc-gr7bw [1.045512134s] Sep 17 17:10:49.606: INFO: Created: latency-svc-89gkn Sep 17 17:10:49.616: INFO: Got endpoints: latency-svc-89gkn [1.064740247s] Sep 17 17:10:49.665: INFO: Created: latency-svc-gsk25 Sep 17 17:10:49.677: INFO: Got endpoints: latency-svc-gsk25 [1.069367333s] Sep 17 17:10:49.744: INFO: Created: latency-svc-9tg9n Sep 17 17:10:49.755: INFO: Got endpoints: latency-svc-9tg9n [1.09136876s] Sep 17 17:10:49.797: INFO: Created: latency-svc-rtv9c Sep 17 17:10:49.814: INFO: Got endpoints: latency-svc-rtv9c [1.058616039s] Sep 17 17:10:49.881: INFO: Created: latency-svc-bstnp Sep 17 17:10:49.905: INFO: Got endpoints: latency-svc-bstnp [1.097779406s] Sep 17 17:10:49.935: INFO: Created: latency-svc-gsnxp Sep 17 17:10:49.953: INFO: Got endpoints: latency-svc-gsnxp [1.050721584s] Sep 17 17:10:50.020: INFO: Created: latency-svc-6h6rp Sep 17 17:10:50.024: INFO: Got endpoints: latency-svc-6h6rp [1.067579864s] Sep 17 17:10:50.048: INFO: Created: latency-svc-9c7l7 Sep 17 17:10:50.063: INFO: Got endpoints: latency-svc-9c7l7 [1.010044765s] Sep 17 17:10:50.157: INFO: Created: latency-svc-7nm8s Sep 17 17:10:50.182: INFO: Got endpoints: latency-svc-7nm8s [977.25213ms] Sep 17 17:10:50.222: INFO: Created: latency-svc-4xg66 Sep 17 17:10:50.236: INFO: Got endpoints: latency-svc-4xg66 [1.007581685s] Sep 17 17:10:50.301: INFO: Created: latency-svc-djgzh Sep 17 17:10:50.303: INFO: Got endpoints: latency-svc-djgzh [1.01601881s] Sep 17 17:10:50.337: INFO: Created: latency-svc-9sbvj Sep 17 17:10:50.350: INFO: Got endpoints: latency-svc-9sbvj [1.004585465s] Sep 17 17:10:50.373: INFO: Created: latency-svc-kd7gl Sep 17 17:10:50.386: INFO: Got endpoints: latency-svc-kd7gl [950.816937ms] Sep 17 17:10:50.438: INFO: Created: latency-svc-nvfg6 Sep 17 17:10:50.442: INFO: Got endpoints: latency-svc-nvfg6 [945.479463ms] Sep 17 17:10:50.468: INFO: Created: latency-svc-4qdkh Sep 17 17:10:50.483: INFO: Got endpoints: latency-svc-4qdkh [925.273164ms] Sep 17 17:10:50.504: INFO: Created: latency-svc-ghh88 Sep 17 17:10:50.518: INFO: Got endpoints: latency-svc-ghh88 [902.460761ms] Sep 17 17:10:50.570: INFO: Created: latency-svc-xphwx Sep 17 17:10:50.576: INFO: Got endpoints: latency-svc-xphwx [898.939166ms] Sep 17 17:10:50.644: INFO: Created: latency-svc-68j4v Sep 17 17:10:50.657: INFO: Got endpoints: latency-svc-68j4v [902.254945ms] Sep 17 17:10:50.703: INFO: Created: latency-svc-bspp2 Sep 17 17:10:50.709: INFO: Got endpoints: latency-svc-bspp2 [894.140803ms] Sep 17 17:10:50.739: INFO: Created: latency-svc-mx8fp Sep 17 17:10:50.762: INFO: Got endpoints: latency-svc-mx8fp [857.172903ms] Sep 17 17:10:50.836: INFO: Created: latency-svc-zd4nj Sep 17 17:10:50.846: INFO: Got endpoints: latency-svc-zd4nj [892.984463ms] Sep 17 17:10:50.876: INFO: Created: latency-svc-cjhng Sep 17 17:10:50.895: INFO: Got endpoints: latency-svc-cjhng [869.961561ms] Sep 17 17:10:50.924: INFO: Created: latency-svc-jl8bn Sep 17 17:10:50.995: INFO: Got endpoints: latency-svc-jl8bn [931.261223ms] Sep 17 17:10:50.996: INFO: Created: latency-svc-d7mmq Sep 17 17:10:51.026: INFO: Got endpoints: latency-svc-d7mmq [843.6169ms] Sep 17 17:10:51.164: INFO: Created: latency-svc-4btdp Sep 17 17:10:51.171: INFO: Got endpoints: latency-svc-4btdp [934.879275ms] Sep 17 17:10:51.207: INFO: Created: latency-svc-fm7kt Sep 17 17:10:51.225: INFO: Got endpoints: latency-svc-fm7kt [922.061588ms] Sep 17 17:10:51.248: INFO: Created: latency-svc-4vqnh Sep 17 17:10:51.314: INFO: Got endpoints: 
latency-svc-4vqnh [963.634813ms] Sep 17 17:10:51.314: INFO: Created: latency-svc-tfsdv Sep 17 17:10:51.328: INFO: Got endpoints: latency-svc-tfsdv [941.486685ms] Sep 17 17:10:51.362: INFO: Created: latency-svc-tqqjn Sep 17 17:10:51.376: INFO: Got endpoints: latency-svc-tqqjn [934.564096ms] Sep 17 17:10:51.403: INFO: Created: latency-svc-8dvd5 Sep 17 17:10:51.468: INFO: Got endpoints: latency-svc-8dvd5 [985.676726ms] Sep 17 17:10:51.469: INFO: Created: latency-svc-6nxnm Sep 17 17:10:51.479: INFO: Got endpoints: latency-svc-6nxnm [960.371294ms] Sep 17 17:10:51.506: INFO: Created: latency-svc-ktlrb Sep 17 17:10:51.514: INFO: Got endpoints: latency-svc-ktlrb [937.933525ms] Sep 17 17:10:51.542: INFO: Created: latency-svc-zpfgx Sep 17 17:10:51.550: INFO: Got endpoints: latency-svc-zpfgx [893.240001ms] Sep 17 17:10:51.609: INFO: Created: latency-svc-z2scv Sep 17 17:10:51.612: INFO: Got endpoints: latency-svc-z2scv [902.757843ms] Sep 17 17:10:51.644: INFO: Created: latency-svc-945kj Sep 17 17:10:51.660: INFO: Got endpoints: latency-svc-945kj [897.775539ms] Sep 17 17:10:51.680: INFO: Created: latency-svc-ccrxk Sep 17 17:10:51.695: INFO: Got endpoints: latency-svc-ccrxk [848.958995ms] Sep 17 17:10:51.731: INFO: Created: latency-svc-4t24s Sep 17 17:10:51.764: INFO: Created: latency-svc-49nm4 Sep 17 17:10:51.764: INFO: Got endpoints: latency-svc-4t24s [869.22634ms] Sep 17 17:10:51.788: INFO: Got endpoints: latency-svc-49nm4 [792.347035ms] Sep 17 17:10:51.818: INFO: Created: latency-svc-tlt4n Sep 17 17:10:51.829: INFO: Got endpoints: latency-svc-tlt4n [802.483265ms] Sep 17 17:10:51.875: INFO: Created: latency-svc-66sbg Sep 17 17:10:51.878: INFO: Got endpoints: latency-svc-66sbg [706.96355ms] Sep 17 17:10:51.950: INFO: Created: latency-svc-l9svj Sep 17 17:10:51.973: INFO: Got endpoints: latency-svc-l9svj [746.951517ms] Sep 17 17:10:52.035: INFO: Created: latency-svc-8t4xd Sep 17 17:10:52.058: INFO: Got endpoints: latency-svc-8t4xd [744.328238ms] Sep 17 17:10:52.089: INFO: Created: latency-svc-rfmbn Sep 17 17:10:52.099: INFO: Got endpoints: latency-svc-rfmbn [771.507691ms] Sep 17 17:10:52.124: INFO: Created: latency-svc-vsln5 Sep 17 17:10:52.182: INFO: Got endpoints: latency-svc-vsln5 [805.10399ms] Sep 17 17:10:52.184: INFO: Created: latency-svc-rzsxq Sep 17 17:10:52.202: INFO: Got endpoints: latency-svc-rzsxq [733.305565ms] Sep 17 17:10:52.238: INFO: Created: latency-svc-pl5cl Sep 17 17:10:52.257: INFO: Got endpoints: latency-svc-pl5cl [778.189391ms] Sep 17 17:10:52.281: INFO: Created: latency-svc-rjpdp Sep 17 17:10:52.343: INFO: Got endpoints: latency-svc-rjpdp [828.989658ms] Sep 17 17:10:52.346: INFO: Created: latency-svc-jrmwz Sep 17 17:10:52.370: INFO: Got endpoints: latency-svc-jrmwz [819.414775ms] Sep 17 17:10:52.406: INFO: Created: latency-svc-mvwdv Sep 17 17:10:52.425: INFO: Got endpoints: latency-svc-mvwdv [812.974879ms] Sep 17 17:10:52.482: INFO: Created: latency-svc-r6stw Sep 17 17:10:52.485: INFO: Got endpoints: latency-svc-r6stw [824.689066ms] Sep 17 17:10:52.519: INFO: Created: latency-svc-hmrpf Sep 17 17:10:52.535: INFO: Got endpoints: latency-svc-hmrpf [839.393164ms] Sep 17 17:10:52.567: INFO: Created: latency-svc-wdhx9 Sep 17 17:10:52.612: INFO: Got endpoints: latency-svc-wdhx9 [848.048522ms] Sep 17 17:10:52.629: INFO: Created: latency-svc-nz7jd Sep 17 17:10:52.654: INFO: Got endpoints: latency-svc-nz7jd [865.611348ms] Sep 17 17:10:52.681: INFO: Created: latency-svc-8hcxv Sep 17 17:10:52.708: INFO: Got endpoints: latency-svc-8hcxv [879.213835ms] Sep 17 17:10:52.781: INFO: Created: 
latency-svc-n9jm7 Sep 17 17:10:52.784: INFO: Got endpoints: latency-svc-n9jm7 [905.860083ms] Sep 17 17:10:52.831: INFO: Created: latency-svc-p2khl Sep 17 17:10:52.853: INFO: Got endpoints: latency-svc-p2khl [879.677733ms] Sep 17 17:10:52.881: INFO: Created: latency-svc-lrkgr Sep 17 17:10:52.929: INFO: Got endpoints: latency-svc-lrkgr [870.521512ms] Sep 17 17:10:52.951: INFO: Created: latency-svc-nr2p2 Sep 17 17:10:52.966: INFO: Got endpoints: latency-svc-nr2p2 [866.440991ms] Sep 17 17:10:52.994: INFO: Created: latency-svc-mhx6p Sep 17 17:10:53.029: INFO: Got endpoints: latency-svc-mhx6p [846.785559ms] Sep 17 17:10:53.087: INFO: Created: latency-svc-48qqp Sep 17 17:10:53.093: INFO: Got endpoints: latency-svc-48qqp [891.058665ms] Sep 17 17:10:53.113: INFO: Created: latency-svc-5w2xf Sep 17 17:10:53.123: INFO: Got endpoints: latency-svc-5w2xf [864.791573ms] Sep 17 17:10:53.149: INFO: Created: latency-svc-2jbz8 Sep 17 17:10:53.173: INFO: Got endpoints: latency-svc-2jbz8 [829.636888ms] Sep 17 17:10:53.234: INFO: Created: latency-svc-7vt7q Sep 17 17:10:53.249: INFO: Got endpoints: latency-svc-7vt7q [878.978015ms] Sep 17 17:10:53.275: INFO: Created: latency-svc-fvzqj Sep 17 17:10:53.297: INFO: Got endpoints: latency-svc-fvzqj [872.101555ms] Sep 17 17:10:53.323: INFO: Created: latency-svc-8bfwp Sep 17 17:10:53.334: INFO: Got endpoints: latency-svc-8bfwp [848.525756ms] Sep 17 17:10:53.403: INFO: Created: latency-svc-lv92c Sep 17 17:10:53.407: INFO: Got endpoints: latency-svc-lv92c [872.356135ms] Sep 17 17:10:53.432: INFO: Created: latency-svc-6vr7g Sep 17 17:10:53.449: INFO: Got endpoints: latency-svc-6vr7g [836.146417ms] Sep 17 17:10:53.472: INFO: Created: latency-svc-sflbj Sep 17 17:10:53.492: INFO: Got endpoints: latency-svc-sflbj [837.851535ms] Sep 17 17:10:53.546: INFO: Created: latency-svc-nmg88 Sep 17 17:10:53.549: INFO: Got endpoints: latency-svc-nmg88 [840.440701ms] Sep 17 17:10:53.568: INFO: Created: latency-svc-9wms5 Sep 17 17:10:53.588: INFO: Got endpoints: latency-svc-9wms5 [803.446713ms] Sep 17 17:10:53.623: INFO: Created: latency-svc-rsv5d Sep 17 17:10:53.690: INFO: Got endpoints: latency-svc-rsv5d [837.045251ms] Sep 17 17:10:53.691: INFO: Created: latency-svc-qksr5 Sep 17 17:10:53.696: INFO: Got endpoints: latency-svc-qksr5 [767.030152ms] Sep 17 17:10:53.719: INFO: Created: latency-svc-k8hkc Sep 17 17:10:53.733: INFO: Got endpoints: latency-svc-k8hkc [766.529049ms] Sep 17 17:10:53.754: INFO: Created: latency-svc-pkvt2 Sep 17 17:10:53.769: INFO: Got endpoints: latency-svc-pkvt2 [739.764028ms] Sep 17 17:10:53.827: INFO: Created: latency-svc-npctd Sep 17 17:10:53.830: INFO: Got endpoints: latency-svc-npctd [736.99311ms] Sep 17 17:10:53.857: INFO: Created: latency-svc-b7r4g Sep 17 17:10:53.871: INFO: Got endpoints: latency-svc-b7r4g [748.189919ms] Sep 17 17:10:53.899: INFO: Created: latency-svc-dm94m Sep 17 17:10:53.919: INFO: Got endpoints: latency-svc-dm94m [745.851216ms] Sep 17 17:10:53.971: INFO: Created: latency-svc-qlf8b Sep 17 17:10:53.975: INFO: Got endpoints: latency-svc-qlf8b [725.44322ms] Sep 17 17:10:54.000: INFO: Created: latency-svc-c6r69 Sep 17 17:10:54.016: INFO: Got endpoints: latency-svc-c6r69 [718.445235ms] Sep 17 17:10:54.043: INFO: Created: latency-svc-7vrqd Sep 17 17:10:54.057: INFO: Got endpoints: latency-svc-7vrqd [723.236127ms] Sep 17 17:10:54.103: INFO: Created: latency-svc-qlwf2 Sep 17 17:10:54.110: INFO: Got endpoints: latency-svc-qlwf2 [702.18932ms] Sep 17 17:10:54.133: INFO: Created: latency-svc-mlm8l Sep 17 17:10:54.140: INFO: Got endpoints: 
latency-svc-mlm8l [691.364279ms] Sep 17 17:10:54.163: INFO: Created: latency-svc-llxt6 Sep 17 17:10:54.177: INFO: Got endpoints: latency-svc-llxt6 [684.922763ms] Sep 17 17:10:54.178: INFO: Latencies: [62.504562ms 75.422972ms 135.354504ms 151.995929ms 200.025882ms 284.636886ms 291.059672ms 340.533509ms 377.144704ms 470.236093ms 484.637267ms 521.016206ms 581.045677ms 616.901527ms 684.922763ms 691.364279ms 702.18932ms 706.96355ms 718.445235ms 723.236127ms 725.44322ms 726.238828ms 733.305565ms 736.99311ms 739.764028ms 744.328238ms 745.851216ms 746.951517ms 748.189919ms 762.828712ms 765.533662ms 766.529049ms 767.030152ms 767.250167ms 771.507691ms 778.189391ms 782.885912ms 786.82894ms 792.347035ms 802.483265ms 803.446713ms 805.10399ms 812.042558ms 812.974879ms 815.067125ms 819.414775ms 824.689066ms 825.215147ms 828.989658ms 829.636888ms 836.146417ms 837.045251ms 837.851535ms 839.157252ms 839.393164ms 840.440701ms 843.6169ms 845.069236ms 846.785559ms 846.93526ms 848.048522ms 848.525756ms 848.958995ms 851.286699ms 851.913615ms 856.750903ms 857.172903ms 864.791573ms 865.611348ms 866.440991ms 869.22634ms 869.961561ms 870.521512ms 872.101555ms 872.356135ms 876.452789ms 877.932662ms 878.978015ms 879.213835ms 879.564521ms 879.677733ms 889.082375ms 890.88574ms 891.058665ms 892.984463ms 893.240001ms 894.140803ms 895.164351ms 897.775539ms 898.939166ms 900.655522ms 902.254945ms 902.460761ms 902.757843ms 905.860083ms 912.211697ms 919.315661ms 922.061588ms 924.103737ms 925.273164ms 928.357424ms 931.261223ms 934.564096ms 934.879275ms 937.933525ms 941.486685ms 945.479463ms 950.816937ms 953.301335ms 956.49894ms 960.371294ms 961.038634ms 963.634813ms 972.973428ms 975.131703ms 976.792005ms 977.25213ms 980.170244ms 982.09652ms 985.676726ms 985.916452ms 988.239003ms 988.554258ms 990.051983ms 990.162391ms 991.307411ms 1.000037558s 1.002918509s 1.00451983s 1.004585465s 1.007581685s 1.010044765s 1.010116431s 1.01075006s 1.011461896s 1.01601881s 1.018104428s 1.023289842s 1.027635069s 1.028379303s 1.035864313s 1.038378944s 1.039891767s 1.040941383s 1.044493086s 1.045476764s 1.045512134s 1.048198099s 1.050721584s 1.053517636s 1.055054231s 1.058616039s 1.064083213s 1.064313419s 1.064740247s 1.067579864s 1.069367333s 1.070775927s 1.076947431s 1.077992055s 1.078606072s 1.082585322s 1.09136876s 1.097779406s 1.098730203s 1.113776878s 1.118017214s 1.12615008s 1.127741074s 1.129416227s 1.13096275s 1.131934013s 1.132972025s 1.15016009s 1.152052878s 1.15875622s 1.15983149s 1.164694777s 1.167092919s 1.174549901s 1.175476039s 1.178666066s 1.193719488s 1.209715782s 1.210045315s 1.214313373s 1.224680147s 1.225727291s 1.234327792s 1.234480318s 1.241534161s 1.242382584s 1.272758236s 1.280894683s 1.287215246s 1.333167242s 1.342619189s 1.356213678s 1.367975382s 1.380896953s] Sep 17 17:10:54.180: INFO: 50 %ile: 928.357424ms Sep 17 17:10:54.180: INFO: 90 %ile: 1.175476039s Sep 17 17:10:54.181: INFO: 99 %ile: 1.367975382s Sep 17 17:10:54.181: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:10:54.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2729" for this suite. 
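The percentile summary at the end of the latency test is an order-statistic over the 200 collected durations. A self-contained sketch of that computation follows; it uses a nearest-rank lookup, the framework's exact interpolation may differ, and the sample is a hand-picked subset of the latencies listed above.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of a sorted sample by
// nearest-rank lookup.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Illustrative sample; the real test records one latency per
	// endpoint-creation event (200 of them in the run above).
	latencies := []time.Duration{
		62 * time.Millisecond,
		135 * time.Millisecond,
		928 * time.Millisecond,
		1175 * time.Millisecond,
		1367 * time.Millisecond,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
	}
}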
• [SLOW TEST:15.991 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":117,"skipped":1904,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:10:54.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 17 17:10:58.401: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:10:58.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2402" for this suite. 
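The termination-message behaviour verified above hinges on one field: with TerminationMessagePolicy set to FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets the tail of its log output ("DONE" in this run) as its termination message. A minimal sketch of such a pod using the k8s.io/api types; the name, image and command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				// Fails after logging; nothing is written to the
				// termination log, so the log tail is used instead.
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePolicy)
}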
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1911,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:10:58.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Sep 17 17:10:58.516: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:10:59.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4065" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":119,"skipped":1928,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:10:59.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4476.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4476.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 17 17:11:08.077: INFO: DNS probes using dns-test-c8491271-795b-48f1-bbda-23bdc9b4f805 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4476.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4476.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 17 17:11:16.741: INFO: File wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 17 17:11:16.752: INFO: File jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 17 17:11:16.752: INFO: Lookups using dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e failed for: [wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local] Sep 17 17:11:21.760: INFO: File wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 17 17:11:21.772: INFO: File jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 17 17:11:21.772: INFO: Lookups using dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e failed for: [wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local] Sep 17 17:11:26.760: INFO: File wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 17 17:11:26.765: INFO: File jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 17 17:11:26.765: INFO: Lookups using dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e failed for: [wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local] Sep 17 17:11:31.761: INFO: File wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 17 17:11:31.766: INFO: File jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local from pod dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 17 17:11:31.766: INFO: Lookups using dns-4476/dns-test-ca314f43-2454-4475-9e4d-22bcb220959e failed for: [wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local] Sep 17 17:11:36.764: INFO: DNS probes using dns-test-ca314f43-2454-4475-9e4d-22bcb220959e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4476.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4476.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4476.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4476.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 17 17:11:45.452: INFO: DNS probes using dns-test-08a13717-1d38-4d1c-95d9-419d9d416200 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:11:45.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4476" for this suite. 
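------------------------------
What the probers above check can be reproduced with kubectl alone: an ExternalName service publishes a CNAME record, and patching spec.externalName re-points it; the repeated "contains 'foo.example.com.'" lines show the old record being served until caches and the DNS TTL catch up. A minimal sketch against a hypothetical default namespace, reusing the test's names:
kubectl create service externalname dns-test-service-3 --external-name foo.example.com
kubectl run -it --rm dns-probe --image=busybox --restart=Never -- \
  nslookup dns-test-service-3.default.svc.cluster.local
# re-point the service, as the test does:
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
------------------------------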
• [SLOW TEST:46.003 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":120,"skipped":1931,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:11:45.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Sep 17 17:11:46.010: INFO: Waiting up to 5m0s for pod "client-containers-2c7f846a-d9c1-40db-a20b-85188458716f" in namespace "containers-5078" to be "success or failure" Sep 17 17:11:46.065: INFO: Pod "client-containers-2c7f846a-d9c1-40db-a20b-85188458716f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.177309ms Sep 17 17:11:48.072: INFO: Pod "client-containers-2c7f846a-d9c1-40db-a20b-85188458716f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06177615s Sep 17 17:11:50.079: INFO: Pod "client-containers-2c7f846a-d9c1-40db-a20b-85188458716f": Phase="Running", Reason="", readiness=true. Elapsed: 4.068572766s Sep 17 17:11:52.086: INFO: Pod "client-containers-2c7f846a-d9c1-40db-a20b-85188458716f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075422483s STEP: Saw pod success Sep 17 17:11:52.086: INFO: Pod "client-containers-2c7f846a-d9c1-40db-a20b-85188458716f" satisfied condition "success or failure" Sep 17 17:11:52.091: INFO: Trying to get logs from node jerma-worker pod client-containers-2c7f846a-d9c1-40db-a20b-85188458716f container test-container: STEP: delete the pod Sep 17 17:11:52.134: INFO: Waiting for pod client-containers-2c7f846a-d9c1-40db-a20b-85188458716f to disappear Sep 17 17:11:52.138: INFO: Pod client-containers-2c7f846a-d9c1-40db-a20b-85188458716f no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:11:52.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5078" for this suite. 
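------------------------------
The "override all" pod above relies on the fact that spec.containers[].command replaces the image's ENTRYPOINT while args replaces its CMD; setting both overrides both. A minimal sketch; the pod name and image are illustrative assumptions (the conformance test uses its own test image):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]    # replaces the image ENTRYPOINT
    args: ["hello", "world"]  # replaces the image CMD
EOF
kubectl logs override-demo    # -> hello world
------------------------------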
• [SLOW TEST:6.549 seconds] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1936,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:11:52.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Sep 17 17:11:52.241: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:13:22.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1900" for this suite.
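------------------------------
"mark a version not served" corresponds to flipping served: false on one version of a multi-version CRD; the apiserver then drops that version's definitions from the published OpenAPI spec (/openapi/v2) while leaving the served version intact, which is exactly what the two checks above assert. A minimal sketch of such a CRD; the group, kind, and names are illustrative assumptions:
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names:
    plural: foos
    kind: Foo
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object}
  - name: v2
    served: false    # unserved: dropped from the published spec
    storage: false
    schema:
      openAPIV3Schema: {type: object}
EOF
# only the served version's definition should appear (names follow the reversed group):
kubectl get --raw /openapi/v2 | grep -o 'com\.example\.v[0-9]*\.Foo' | sort -u
------------------------------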
• [SLOW TEST:90.271 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":122,"skipped":1940,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:13:22.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-9xg5 STEP: Creating a pod to test atomic-volume-subpath Sep 17 17:13:22.520: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9xg5" in namespace "subpath-1918" to be "success or failure" Sep 17 17:13:22.525: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.996375ms Sep 17 17:13:24.532: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011917989s Sep 17 17:13:26.539: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 4.019046182s Sep 17 17:13:28.547: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 6.026226726s Sep 17 17:13:30.553: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 8.032883855s Sep 17 17:13:32.560: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 10.039490206s Sep 17 17:13:34.567: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 12.04608776s Sep 17 17:13:36.573: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 14.052948029s Sep 17 17:13:38.580: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 16.060029363s Sep 17 17:13:40.587: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.066740276s Sep 17 17:13:42.594: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 20.073668623s Sep 17 17:13:44.601: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Running", Reason="", readiness=true. Elapsed: 22.080524179s Sep 17 17:13:46.608: INFO: Pod "pod-subpath-test-configmap-9xg5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.088040044s STEP: Saw pod success Sep 17 17:13:46.609: INFO: Pod "pod-subpath-test-configmap-9xg5" satisfied condition "success or failure" Sep 17 17:13:46.615: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-9xg5 container test-container-subpath-configmap-9xg5: STEP: delete the pod Sep 17 17:13:46.776: INFO: Waiting for pod pod-subpath-test-configmap-9xg5 to disappear Sep 17 17:13:46.787: INFO: Pod pod-subpath-test-configmap-9xg5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-9xg5 Sep 17 17:13:46.787: INFO: Deleting pod "pod-subpath-test-configmap-9xg5" in namespace "subpath-1918" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:13:46.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1918" for this suite. • [SLOW TEST:24.380 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":123,"skipped":1943,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:13:46.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-00a34911-a48b-4fab-9fb6-2f222820134d STEP: Creating a pod to test consume secrets Sep 17 17:13:47.011: INFO: Waiting up to 5m0s for pod "pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9" in namespace "secrets-4999" to be "success or 
failure" Sep 17 17:13:47.058: INFO: Pod "pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 46.459677ms Sep 17 17:13:49.065: INFO: Pod "pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053625271s Sep 17 17:13:51.072: INFO: Pod "pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060438331s STEP: Saw pod success Sep 17 17:13:51.072: INFO: Pod "pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9" satisfied condition "success or failure" Sep 17 17:13:51.078: INFO: Trying to get logs from node jerma-worker pod pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9 container secret-volume-test: STEP: delete the pod Sep 17 17:13:51.099: INFO: Waiting for pod pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9 to disappear Sep 17 17:13:51.104: INFO: Pod pod-secrets-fba378b1-5945-4db0-9155-2355ed1bf9e9 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:13:51.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4999" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1945,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:13:51.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 17 17:13:57.351: INFO: DNS probes using dns-2304/dns-test-2a9dd376-af9e-4c9c-b528-dc8b263669f4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:13:57.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2304" for this suite. • [SLOW TEST:6.348 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":125,"skipped":1964,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:13:57.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-555ddb86-dba3-4938-b1b9-481c02eb1ceb STEP: Creating a pod to test consume secrets Sep 17 17:13:57.878: INFO: Waiting up to 5m0s for pod "pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08" in namespace "secrets-3298" to be "success or failure" Sep 17 
17:13:57.891: INFO: Pod "pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08": Phase="Pending", Reason="", readiness=false. Elapsed: 12.591586ms Sep 17 17:13:59.943: INFO: Pod "pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063961601s Sep 17 17:14:01.950: INFO: Pod "pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071132924s STEP: Saw pod success Sep 17 17:14:01.950: INFO: Pod "pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08" satisfied condition "success or failure" Sep 17 17:14:01.955: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08 container secret-volume-test: STEP: delete the pod Sep 17 17:14:02.150: INFO: Waiting for pod pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08 to disappear Sep 17 17:14:02.155: INFO: Pod pod-secrets-d571b7ed-3b98-406c-8bfc-df084b059e08 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:14:02.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3298" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1968,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:14:02.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Sep 17 17:14:02.296: INFO: Waiting up to 5m0s for pod "downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7" in namespace "downward-api-9394" to be "success or failure" Sep 17 17:14:02.368: INFO: Pod "downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 72.002659ms Sep 17 17:14:04.374: INFO: Pod "downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078122326s Sep 17 17:14:06.410: INFO: Pod "downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.113914598s STEP: Saw pod success Sep 17 17:14:06.410: INFO: Pod "downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7" satisfied condition "success or failure" Sep 17 17:14:06.414: INFO: Trying to get logs from node jerma-worker pod downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7 container dapi-container: STEP: delete the pod Sep 17 17:14:06.771: INFO: Waiting for pod downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7 to disappear Sep 17 17:14:06.787: INFO: Pod downward-api-c20e1fd8-a7bf-4dd0-9ed4-7f09757ae2a7 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:14:06.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9394" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":1969,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:14:06.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-df9h STEP: Creating a pod to test atomic-volume-subpath Sep 17 17:14:06.927: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-df9h" in namespace "subpath-3162" to be "success or failure" Sep 17 17:14:06.961: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Pending", Reason="", readiness=false. Elapsed: 33.097906ms Sep 17 17:14:08.973: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045688381s Sep 17 17:14:10.981: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 4.053609368s Sep 17 17:14:12.988: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 6.060567435s Sep 17 17:14:14.996: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 8.067946748s Sep 17 17:14:17.002: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 10.074598391s Sep 17 17:14:19.009: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 12.081671876s Sep 17 17:14:21.017: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.089443147s Sep 17 17:14:23.024: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 16.096138988s Sep 17 17:14:25.031: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 18.10304499s Sep 17 17:14:27.038: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 20.110387451s Sep 17 17:14:29.045: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Running", Reason="", readiness=true. Elapsed: 22.117646176s Sep 17 17:14:31.052: INFO: Pod "pod-subpath-test-downwardapi-df9h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.124179127s STEP: Saw pod success Sep 17 17:14:31.052: INFO: Pod "pod-subpath-test-downwardapi-df9h" satisfied condition "success or failure" Sep 17 17:14:31.057: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-df9h container test-container-subpath-downwardapi-df9h: STEP: delete the pod Sep 17 17:14:31.090: INFO: Waiting for pod pod-subpath-test-downwardapi-df9h to disappear Sep 17 17:14:31.125: INFO: Pod pod-subpath-test-downwardapi-df9h no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-df9h Sep 17 17:14:31.125: INFO: Deleting pod "pod-subpath-test-downwardapi-df9h" in namespace "subpath-3162" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:14:31.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3162" for this suite. • [SLOW TEST:24.457 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":128,"skipped":2013,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:14:31.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-425 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-425 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-425 Sep 17 17:14:31.362: INFO: Found 0 stateful pods, waiting for 1 Sep 17 17:14:41.370: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 17 17:14:41.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-425 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 17:14:42.796: INFO: stderr: "I0917 17:14:42.624534 1999 log.go:172] (0x2906230) (0x29062a0) Create stream\nI0917 17:14:42.626847 1999 log.go:172] (0x2906230) (0x29062a0) Stream added, broadcasting: 1\nI0917 17:14:42.638456 1999 log.go:172] (0x2906230) Reply frame received for 1\nI0917 17:14:42.642329 1999 log.go:172] (0x2906230) (0x2992070) Create stream\nI0917 17:14:42.642966 1999 log.go:172] (0x2906230) (0x2992070) Stream added, broadcasting: 3\nI0917 17:14:42.649832 1999 log.go:172] (0x2906230) Reply frame received for 3\nI0917 17:14:42.650346 1999 log.go:172] (0x2906230) (0x29921c0) Create stream\nI0917 17:14:42.650461 1999 log.go:172] (0x2906230) (0x29921c0) Stream added, broadcasting: 5\nI0917 17:14:42.651841 1999 log.go:172] (0x2906230) Reply frame received for 5\nI0917 17:14:42.744853 1999 log.go:172] (0x2906230) Data frame received for 5\nI0917 17:14:42.745235 1999 log.go:172] (0x29921c0) (5) Data frame handling\nI0917 17:14:42.745903 1999 log.go:172] (0x29921c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 17:14:42.778426 1999 log.go:172] (0x2906230) Data frame received for 3\nI0917 17:14:42.778575 1999 log.go:172] (0x2992070) (3) Data frame handling\nI0917 17:14:42.778738 1999 log.go:172] (0x2906230) Data frame received for 5\nI0917 17:14:42.778991 1999 log.go:172] (0x29921c0) (5) Data frame handling\nI0917 17:14:42.779214 1999 log.go:172] (0x2992070) (3) Data frame sent\nI0917 17:14:42.779343 1999 log.go:172] (0x2906230) Data frame received for 3\nI0917 17:14:42.779452 1999 log.go:172] (0x2992070) (3) Data frame handling\nI0917 17:14:42.780675 1999 log.go:172] (0x2906230) Data frame received for 1\nI0917 17:14:42.780871 1999 log.go:172] (0x29062a0) (1) Data frame handling\nI0917 17:14:42.781053 1999 log.go:172] (0x29062a0) (1) Data frame sent\nI0917 17:14:42.782815 1999 log.go:172] (0x2906230) (0x29062a0) Stream removed, broadcasting: 1\nI0917 17:14:42.784386 1999 log.go:172] (0x2906230) Go away received\nI0917 17:14:42.786204 1999 log.go:172] (0x2906230) (0x29062a0) Stream removed, broadcasting: 1\nI0917 17:14:42.786803 1999 log.go:172] (0x2906230) (0x2992070) Stream removed, broadcasting: 3\nI0917 17:14:42.787293 1999 log.go:172] (0x2906230) (0x29921c0) Stream removed, broadcasting: 5\n" Sep 17 17:14:42.796: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 17:14:42.796: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 17:14:42.802: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 17 17:14:52.810: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 17 17:14:52.811: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 17:14:52.867: INFO: POD NODE PHASE GRACE CONDITIONS Sep 17 17:14:52.869: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC }] Sep 17 17:14:52.870: INFO: Sep 17 17:14:52.870: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 17 17:14:53.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.955592007s Sep 17 17:14:55.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.947523163s Sep 17 17:14:56.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.665216033s Sep 17 17:14:57.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.652402811s Sep 17 17:14:58.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.605694896s Sep 17 17:14:59.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.581404943s Sep 17 17:15:00.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.572068814s Sep 17 17:15:01.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.563295149s Sep 17 17:15:02.280: INFO: Verifying statefulset ss doesn't scale past 3 for another 555.07676ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-425 Sep 17 17:15:03.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-425 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 17:15:04.661: INFO: stderr: "I0917 17:15:04.550299 2023 log.go:172] (0x25c6540) (0x25c72d0) Create stream\nI0917 17:15:04.552464 2023 log.go:172] (0x25c6540) (0x25c72d0) Stream added, broadcasting: 1\nI0917 17:15:04.562544 2023 log.go:172] (0x25c6540) Reply frame received for 1\nI0917 17:15:04.563393 2023 log.go:172] (0x25c6540) (0x25c77a0) Create stream\nI0917 17:15:04.563490 2023 log.go:172] (0x25c6540) (0x25c77a0) Stream added, broadcasting: 3\nI0917 17:15:04.565136 2023 log.go:172] (0x25c6540) Reply frame received for 3\nI0917 17:15:04.565328 2023 log.go:172] (0x25c6540) (0x28ec070) Create stream\nI0917 17:15:04.565382 2023 log.go:172] (0x25c6540) (0x28ec070) Stream added, broadcasting: 5\nI0917 17:15:04.566649 2023 log.go:172] (0x25c6540) Reply frame received for 5\nI0917 17:15:04.644485 2023 log.go:172] (0x25c6540) Data frame received for 5\nI0917 17:15:04.644830 2023 log.go:172] (0x25c6540) Data frame received for 3\nI0917 17:15:04.645060 2023 log.go:172] (0x25c6540) Data frame received for 1\nI0917 17:15:04.645241 2023 log.go:172] (0x25c72d0) (1) Data frame handling\nI0917 17:15:04.645800 2023 log.go:172] (0x25c77a0) (3) Data frame handling\nI0917 17:15:04.646076 2023 log.go:172] (0x28ec070) (5) Data frame handling\nI0917 17:15:04.646701 2023 log.go:172] (0x25c72d0) (1) Data frame 
sent\nI0917 17:15:04.646825 2023 log.go:172] (0x25c77a0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0917 17:15:04.648233 2023 log.go:172] (0x28ec070) (5) Data frame sent\nI0917 17:15:04.648518 2023 log.go:172] (0x25c6540) Data frame received for 3\nI0917 17:15:04.648960 2023 log.go:172] (0x25c6540) Data frame received for 5\nI0917 17:15:04.649394 2023 log.go:172] (0x25c6540) (0x25c72d0) Stream removed, broadcasting: 1\nI0917 17:15:04.649972 2023 log.go:172] (0x28ec070) (5) Data frame handling\nI0917 17:15:04.650247 2023 log.go:172] (0x25c77a0) (3) Data frame handling\nI0917 17:15:04.650793 2023 log.go:172] (0x25c6540) Go away received\nI0917 17:15:04.653232 2023 log.go:172] (0x25c6540) (0x25c72d0) Stream removed, broadcasting: 1\nI0917 17:15:04.653428 2023 log.go:172] (0x25c6540) (0x25c77a0) Stream removed, broadcasting: 3\nI0917 17:15:04.653780 2023 log.go:172] (0x25c6540) (0x28ec070) Stream removed, broadcasting: 5\n" Sep 17 17:15:04.662: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 17:15:04.662: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 17:15:04.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-425 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 17:15:06.059: INFO: stderr: "I0917 17:15:05.923801 2047 log.go:172] (0x28da000) (0x28da070) Create stream\nI0917 17:15:05.928459 2047 log.go:172] (0x28da000) (0x28da070) Stream added, broadcasting: 1\nI0917 17:15:05.948477 2047 log.go:172] (0x28da000) Reply frame received for 1\nI0917 17:15:05.949057 2047 log.go:172] (0x28da000) (0x25127e0) Create stream\nI0917 17:15:05.949145 2047 log.go:172] (0x28da000) (0x25127e0) Stream added, broadcasting: 3\nI0917 17:15:05.950484 2047 log.go:172] (0x28da000) Reply frame received for 3\nI0917 17:15:05.950704 2047 log.go:172] (0x28da000) (0x24a81c0) Create stream\nI0917 17:15:05.950778 2047 log.go:172] (0x28da000) (0x24a81c0) Stream added, broadcasting: 5\nI0917 17:15:05.951822 2047 log.go:172] (0x28da000) Reply frame received for 5\nI0917 17:15:06.037699 2047 log.go:172] (0x28da000) Data frame received for 5\nI0917 17:15:06.038078 2047 log.go:172] (0x28da000) Data frame received for 3\nI0917 17:15:06.038328 2047 log.go:172] (0x25127e0) (3) Data frame handling\nI0917 17:15:06.038452 2047 log.go:172] (0x24a81c0) (5) Data frame handling\nI0917 17:15:06.038762 2047 log.go:172] (0x28da000) Data frame received for 1\nI0917 17:15:06.038859 2047 log.go:172] (0x28da070) (1) Data frame handling\nI0917 17:15:06.039936 2047 log.go:172] (0x24a81c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0917 17:15:06.040554 2047 log.go:172] (0x25127e0) (3) Data frame sent\nI0917 17:15:06.041040 2047 log.go:172] (0x28da070) (1) Data frame sent\nI0917 17:15:06.041182 2047 log.go:172] (0x28da000) Data frame received for 5\nI0917 17:15:06.041319 2047 log.go:172] (0x24a81c0) (5) Data frame handling\nI0917 17:15:06.041425 2047 log.go:172] (0x28da000) Data frame received for 3\nI0917 17:15:06.041653 2047 log.go:172] (0x25127e0) (3) Data frame handling\nI0917 17:15:06.043753 2047 log.go:172] (0x28da000) (0x28da070) Stream removed, broadcasting: 1\nI0917 17:15:06.046693 2047 log.go:172] (0x28da000) Go away received\nI0917 17:15:06.050002 2047 log.go:172] (0x28da000) 
(0x28da070) Stream removed, broadcasting: 1\nI0917 17:15:06.050550 2047 log.go:172] (0x28da000) (0x25127e0) Stream removed, broadcasting: 3\nI0917 17:15:06.050810 2047 log.go:172] (0x28da000) (0x24a81c0) Stream removed, broadcasting: 5\n" Sep 17 17:15:06.060: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 17:15:06.060: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 17:15:06.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-425 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 17 17:15:07.386: INFO: stderr: "I0917 17:15:07.308874 2072 log.go:172] (0x29b6000) (0x29b6070) Create stream\nI0917 17:15:07.312254 2072 log.go:172] (0x29b6000) (0x29b6070) Stream added, broadcasting: 1\nI0917 17:15:07.327949 2072 log.go:172] (0x29b6000) Reply frame received for 1\nI0917 17:15:07.328448 2072 log.go:172] (0x29b6000) (0x26fbd50) Create stream\nI0917 17:15:07.328521 2072 log.go:172] (0x29b6000) (0x26fbd50) Stream added, broadcasting: 3\nI0917 17:15:07.329846 2072 log.go:172] (0x29b6000) Reply frame received for 3\nI0917 17:15:07.330134 2072 log.go:172] (0x29b6000) (0x27f8070) Create stream\nI0917 17:15:07.330214 2072 log.go:172] (0x29b6000) (0x27f8070) Stream added, broadcasting: 5\nI0917 17:15:07.331359 2072 log.go:172] (0x29b6000) Reply frame received for 5\nI0917 17:15:07.368355 2072 log.go:172] (0x29b6000) Data frame received for 5\nI0917 17:15:07.368877 2072 log.go:172] (0x29b6000) Data frame received for 3\nI0917 17:15:07.369106 2072 log.go:172] (0x26fbd50) (3) Data frame handling\nI0917 17:15:07.369406 2072 log.go:172] (0x27f8070) (5) Data frame handling\nI0917 17:15:07.369654 2072 log.go:172] (0x29b6000) Data frame received for 1\nI0917 17:15:07.369782 2072 log.go:172] (0x29b6070) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0917 17:15:07.370745 2072 log.go:172] (0x29b6070) (1) Data frame sent\nI0917 17:15:07.370878 2072 log.go:172] (0x26fbd50) (3) Data frame sent\nI0917 17:15:07.371017 2072 log.go:172] (0x27f8070) (5) Data frame sent\nI0917 17:15:07.371214 2072 log.go:172] (0x29b6000) Data frame received for 5\nI0917 17:15:07.371375 2072 log.go:172] (0x27f8070) (5) Data frame handling\nI0917 17:15:07.371562 2072 log.go:172] (0x29b6000) Data frame received for 3\nI0917 17:15:07.371784 2072 log.go:172] (0x26fbd50) (3) Data frame handling\nI0917 17:15:07.372471 2072 log.go:172] (0x29b6000) (0x29b6070) Stream removed, broadcasting: 1\nI0917 17:15:07.376805 2072 log.go:172] (0x29b6000) Go away received\nI0917 17:15:07.378123 2072 log.go:172] (0x29b6000) (0x29b6070) Stream removed, broadcasting: 1\nI0917 17:15:07.378540 2072 log.go:172] (0x29b6000) (0x26fbd50) Stream removed, broadcasting: 3\nI0917 17:15:07.378994 2072 log.go:172] (0x29b6000) (0x27f8070) Stream removed, broadcasting: 5\n" Sep 17 17:15:07.387: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 17 17:15:07.388: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 17 17:15:07.394: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:15:07.394: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 17 
17:15:07.395: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 17 17:15:07.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-425 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 17:15:08.889: INFO: stderr: "I0917 17:15:08.755687 2096 log.go:172] (0x2bda070) (0x2bda150) Create stream\nI0917 17:15:08.758115 2096 log.go:172] (0x2bda070) (0x2bda150) Stream added, broadcasting: 1\nI0917 17:15:08.769247 2096 log.go:172] (0x2bda070) Reply frame received for 1\nI0917 17:15:08.770072 2096 log.go:172] (0x2bda070) (0x2878070) Create stream\nI0917 17:15:08.770169 2096 log.go:172] (0x2bda070) (0x2878070) Stream added, broadcasting: 3\nI0917 17:15:08.772600 2096 log.go:172] (0x2bda070) Reply frame received for 3\nI0917 17:15:08.773224 2096 log.go:172] (0x2bda070) (0x28fa070) Create stream\nI0917 17:15:08.773402 2096 log.go:172] (0x2bda070) (0x28fa070) Stream added, broadcasting: 5\nI0917 17:15:08.775554 2096 log.go:172] (0x2bda070) Reply frame received for 5\nI0917 17:15:08.870450 2096 log.go:172] (0x2bda070) Data frame received for 3\nI0917 17:15:08.870754 2096 log.go:172] (0x2bda070) Data frame received for 1\nI0917 17:15:08.871025 2096 log.go:172] (0x2bda150) (1) Data frame handling\nI0917 17:15:08.871186 2096 log.go:172] (0x2878070) (3) Data frame handling\nI0917 17:15:08.871605 2096 log.go:172] (0x2bda070) Data frame received for 5\nI0917 17:15:08.871787 2096 log.go:172] (0x28fa070) (5) Data frame handling\nI0917 17:15:08.872039 2096 log.go:172] (0x28fa070) (5) Data frame sent\nI0917 17:15:08.872322 2096 log.go:172] (0x2bda150) (1) Data frame sent\nI0917 17:15:08.872580 2096 log.go:172] (0x2878070) (3) Data frame sent\nI0917 17:15:08.872716 2096 log.go:172] (0x2bda070) Data frame received for 3\nI0917 17:15:08.872820 2096 log.go:172] (0x2878070) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 17:15:08.873334 2096 log.go:172] (0x2bda070) Data frame received for 5\nI0917 17:15:08.873459 2096 log.go:172] (0x28fa070) (5) Data frame handling\nI0917 17:15:08.874997 2096 log.go:172] (0x2bda070) (0x2bda150) Stream removed, broadcasting: 1\nI0917 17:15:08.877083 2096 log.go:172] (0x2bda070) Go away received\nI0917 17:15:08.880060 2096 log.go:172] (0x2bda070) (0x2bda150) Stream removed, broadcasting: 1\nI0917 17:15:08.880786 2096 log.go:172] (0x2bda070) (0x2878070) Stream removed, broadcasting: 3\nI0917 17:15:08.881074 2096 log.go:172] (0x2bda070) (0x28fa070) Stream removed, broadcasting: 5\n" Sep 17 17:15:08.891: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 17:15:08.891: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 17:15:08.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-425 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 17:15:10.322: INFO: stderr: "I0917 17:15:10.122422 2118 log.go:172] (0x29181c0) (0x2918230) Create stream\nI0917 17:15:10.127870 2118 log.go:172] (0x29181c0) (0x2918230) Stream added, broadcasting: 1\nI0917 17:15:10.144405 2118 log.go:172] (0x29181c0) Reply frame received for 1\nI0917 17:15:10.144908 2118 log.go:172] (0x29181c0) (0x26a62a0) Create stream\nI0917 17:15:10.144979 2118 log.go:172] (0x29181c0) (0x26a62a0) Stream 
added, broadcasting: 3\nI0917 17:15:10.146630 2118 log.go:172] (0x29181c0) Reply frame received for 3\nI0917 17:15:10.146887 2118 log.go:172] (0x29181c0) (0x2c38070) Create stream\nI0917 17:15:10.146956 2118 log.go:172] (0x29181c0) (0x2c38070) Stream added, broadcasting: 5\nI0917 17:15:10.148200 2118 log.go:172] (0x29181c0) Reply frame received for 5\nI0917 17:15:10.227962 2118 log.go:172] (0x29181c0) Data frame received for 5\nI0917 17:15:10.228322 2118 log.go:172] (0x2c38070) (5) Data frame handling\nI0917 17:15:10.228813 2118 log.go:172] (0x2c38070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 17:15:10.301948 2118 log.go:172] (0x29181c0) Data frame received for 3\nI0917 17:15:10.302154 2118 log.go:172] (0x26a62a0) (3) Data frame handling\nI0917 17:15:10.302293 2118 log.go:172] (0x26a62a0) (3) Data frame sent\nI0917 17:15:10.302405 2118 log.go:172] (0x29181c0) Data frame received for 3\nI0917 17:15:10.302523 2118 log.go:172] (0x29181c0) Data frame received for 5\nI0917 17:15:10.302738 2118 log.go:172] (0x2c38070) (5) Data frame handling\nI0917 17:15:10.302923 2118 log.go:172] (0x26a62a0) (3) Data frame handling\nI0917 17:15:10.304238 2118 log.go:172] (0x29181c0) Data frame received for 1\nI0917 17:15:10.304452 2118 log.go:172] (0x2918230) (1) Data frame handling\nI0917 17:15:10.304754 2118 log.go:172] (0x2918230) (1) Data frame sent\nI0917 17:15:10.309055 2118 log.go:172] (0x29181c0) (0x2918230) Stream removed, broadcasting: 1\nI0917 17:15:10.310771 2118 log.go:172] (0x29181c0) Go away received\nI0917 17:15:10.314495 2118 log.go:172] (0x29181c0) (0x2918230) Stream removed, broadcasting: 1\nI0917 17:15:10.314839 2118 log.go:172] (0x29181c0) (0x26a62a0) Stream removed, broadcasting: 3\nI0917 17:15:10.315071 2118 log.go:172] (0x29181c0) (0x2c38070) Stream removed, broadcasting: 5\n" Sep 17 17:15:10.322: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 17:15:10.322: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 17:15:10.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-425 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 17 17:15:11.710: INFO: stderr: "I0917 17:15:11.558757 2142 log.go:172] (0x28405b0) (0x2840620) Create stream\nI0917 17:15:11.560706 2142 log.go:172] (0x28405b0) (0x2840620) Stream added, broadcasting: 1\nI0917 17:15:11.575478 2142 log.go:172] (0x28405b0) Reply frame received for 1\nI0917 17:15:11.576515 2142 log.go:172] (0x28405b0) (0x24a2380) Create stream\nI0917 17:15:11.576605 2142 log.go:172] (0x28405b0) (0x24a2380) Stream added, broadcasting: 3\nI0917 17:15:11.578038 2142 log.go:172] (0x28405b0) Reply frame received for 3\nI0917 17:15:11.578260 2142 log.go:172] (0x28405b0) (0x24a3030) Create stream\nI0917 17:15:11.578324 2142 log.go:172] (0x28405b0) (0x24a3030) Stream added, broadcasting: 5\nI0917 17:15:11.579307 2142 log.go:172] (0x28405b0) Reply frame received for 5\nI0917 17:15:11.661283 2142 log.go:172] (0x28405b0) Data frame received for 5\nI0917 17:15:11.661690 2142 log.go:172] (0x24a3030) (5) Data frame handling\nI0917 17:15:11.662541 2142 log.go:172] (0x24a3030) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0917 17:15:11.691947 2142 log.go:172] (0x28405b0) Data frame received for 3\nI0917 17:15:11.692240 2142 log.go:172] (0x24a2380) (3) Data frame handling\nI0917 
17:15:11.692529 2142 log.go:172] (0x28405b0) Data frame received for 5\nI0917 17:15:11.692768 2142 log.go:172] (0x24a3030) (5) Data frame handling\nI0917 17:15:11.693035 2142 log.go:172] (0x24a2380) (3) Data frame sent\nI0917 17:15:11.693167 2142 log.go:172] (0x28405b0) Data frame received for 3\nI0917 17:15:11.693268 2142 log.go:172] (0x24a2380) (3) Data frame handling\nI0917 17:15:11.693638 2142 log.go:172] (0x28405b0) Data frame received for 1\nI0917 17:15:11.693788 2142 log.go:172] (0x2840620) (1) Data frame handling\nI0917 17:15:11.693948 2142 log.go:172] (0x2840620) (1) Data frame sent\nI0917 17:15:11.695736 2142 log.go:172] (0x28405b0) (0x2840620) Stream removed, broadcasting: 1\nI0917 17:15:11.698108 2142 log.go:172] (0x28405b0) Go away received\nI0917 17:15:11.701778 2142 log.go:172] (0x28405b0) (0x2840620) Stream removed, broadcasting: 1\nI0917 17:15:11.702069 2142 log.go:172] (0x28405b0) (0x24a2380) Stream removed, broadcasting: 3\nI0917 17:15:11.702300 2142 log.go:172] (0x28405b0) (0x24a3030) Stream removed, broadcasting: 5\n" Sep 17 17:15:11.711: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 17 17:15:11.711: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 17 17:15:11.711: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 17:15:11.717: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Sep 17 17:15:21.742: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 17 17:15:21.742: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 17 17:15:21.742: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 17 17:15:21.764: INFO: POD NODE PHASE GRACE CONDITIONS Sep 17 17:15:21.764: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC }] Sep 17 17:15:21.765: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:21.765: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:21.765: INFO: Sep 17 17:15:21.766: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 17 17:15:22.773: INFO: POD NODE PHASE GRACE CONDITIONS Sep 
17 17:15:22.773: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC }] Sep 17 17:15:22.773: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:22.774: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:22.774: INFO: Sep 17 17:15:22.774: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 17 17:15:23.783: INFO: POD NODE PHASE GRACE CONDITIONS Sep 17 17:15:23.783: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC }] Sep 17 17:15:23.784: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:23.784: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:23.785: INFO: Sep 17 17:15:23.785: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 17 17:15:24.792: INFO: POD NODE PHASE GRACE CONDITIONS Sep 17 17:15:24.793: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC }] Sep 17 17:15:24.793: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:24.793: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:24.794: INFO: Sep 17 17:15:24.794: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 17 17:15:25.802: INFO: POD NODE PHASE GRACE CONDITIONS Sep 17 17:15:25.802: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC }] Sep 17 17:15:25.803: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:25.804: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:25.804: INFO: Sep 17 17:15:25.805: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 17 17:15:26.812: INFO: POD NODE PHASE GRACE CONDITIONS Sep 17 17:15:26.812: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:31 +0000 UTC }] Sep 17 17:15:26.813: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:26.813: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:15:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-17 17:14:52 +0000 UTC }] Sep 17 17:15:26.814: INFO: Sep 17 17:15:26.814: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 17 17:15:27.819: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.934756254s Sep 17 17:15:28.826: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.928839736s Sep 17 17:15:29.832: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.922468069s Sep 17 17:15:30.839: INFO: Verifying statefulset ss doesn't scale past 0 for another 915.866679ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-425 Sep 17 17:15:31.846: INFO: Scaling statefulset ss to 0 Sep 17 17:15:31.859: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Sep 17 17:15:31.863: INFO: Deleting all statefulset in ns statefulset-425 Sep 17 17:15:31.867: INFO: Scaling statefulset ss to 0 Sep 17 17:15:31.879: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 17:15:31.882: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:15:31.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-425" for this suite.
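The scale-down above combines two primitives: kubectl exec moves each pod's index.html aside so the readiness probe fails, then spec.replicas is zeroed and the suite polls until status.replicas reaches 0, proving burst deletion does not block on unready pods. A minimal client-go sketch of the scaling step, assuming v0.18+ context-taking signatures (the v1.17 suite logged here predates them) and reusing the kubeconfig path and object names from this run:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the harness in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Mirrors "Scaling statefulset ss to 0": fetch, zero replicas, update.
	ss, err := cs.AppsV1().StatefulSets("statefulset-425").Get(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(0)
	ss.Spec.Replicas = &replicas
	if _, err := cs.AppsV1().StatefulSets("statefulset-425").Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}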
• [SLOW TEST:60.662 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":129,"skipped":2023,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:15:31.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 17:15:32.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0" in namespace "downward-api-5204" to be "success or failure" Sep 17 17:15:32.030: INFO: Pod "downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.439916ms Sep 17 17:15:34.118: INFO: Pod "downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093672822s Sep 17 17:15:36.125: INFO: Pod "downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.100196791s STEP: Saw pod success Sep 17 17:15:36.125: INFO: Pod "downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0" satisfied condition "success or failure" Sep 17 17:15:36.220: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0 container client-container: STEP: delete the pod Sep 17 17:15:38.601: INFO: Waiting for pod downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0 to disappear Sep 17 17:15:38.689: INFO: Pod downwardapi-volume-96816329-12e3-4a9c-965d-a2a302f7b7e0 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:15:38.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5204" for this suite. • [SLOW TEST:6.877 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2029,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:15:38.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Sep 17 17:15:38.871: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:17:26.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9383" for this suite. 
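The rename step boils down to editing the name of one served version in the CRD spec and updating it; the apiserver then republishes the OpenAPI document under the new version name and drops the old one, which the subsequent checks verify. A sketch with the v1 apiextensions client and v0.18+ signatures; the CRD name and version names are hypothetical, since the suite generates random groups:

package main

import (
	"context"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cl, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Hypothetical CRD name; the e2e suite creates a randomized one.
	crd, err := cl.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, "e2e-test-crds.example.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Rename one served version in place; the OpenAPI publisher picks the
	// change up, which is what "check the new/old version name" asserts.
	for i := range crd.Spec.Versions {
		if crd.Spec.Versions[i].Name == "v2" {
			crd.Spec.Versions[i].Name = "v3"
		}
	}
	if _, err := cl.ApiextensionsV1().CustomResourceDefinitions().Update(ctx, crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}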
• [SLOW TEST:108.042 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":131,"skipped":2038,"failed":0} SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:17:26.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9033, will wait for the garbage collector to delete the pods Sep 17 17:17:32.995: INFO: Deleting Job.batch foo took: 8.541605ms Sep 17 17:17:33.396: INFO: Terminating Job.batch foo pods took: 401.063673ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:18:06.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9033" for this suite. 
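The delete above is cascading: the Job object goes first ("Deleting Job.batch foo took: 8.5ms") and the garbage collector then reaps the pods, which is why "Ensuring job was deleted" waits a further half minute. A sketch of the same delete with an explicit propagation policy; background matches the behavior logged here, while foreground would instead hold the Job until its pods are gone:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: the Job disappears immediately and the
	// garbage collector deletes the dependent pods afterwards.
	policy := metav1.DeletePropagationBackground
	err = cs.BatchV1().Jobs("job-9033").Delete(context.Background(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}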
• [SLOW TEST:39.370 seconds] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":132,"skipped":2041,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:18:06.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 17:18:06.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8" in namespace "downward-api-2942" to be "success or failure" Sep 17 17:18:06.343: INFO: Pod "downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.163499ms Sep 17 17:18:08.350: INFO: Pod "downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028116844s Sep 17 17:18:10.357: INFO: Pod "downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034601882s STEP: Saw pod success Sep 17 17:18:10.357: INFO: Pod "downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8" satisfied condition "success or failure" Sep 17 17:18:10.362: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8 container client-container: STEP: delete the pod Sep 17 17:18:10.443: INFO: Waiting for pod downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8 to disappear Sep 17 17:18:10.450: INFO: Pod downwardapi-volume-489f6288-300e-49df-8eed-17b6120ea1d8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:18:10.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2942" for this suite. 
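Because the container sets no CPU limit, the downward API falls back to the node's allocatable CPU when materializing limits.cpu into the volume file, and that is the value the test asserts. A sketch of the pod shape involved, with busybox standing in for the suite's mounttest image and a hypothetical mount path:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// Deliberately no resources.limits.cpu: the downward API then
				// reports the node's allocatable CPU as the default limit.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}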
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2066,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:18:10.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:18:10.577: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Sep 17 17:18:10.588: INFO: Number of nodes with available pods: 0 Sep 17 17:18:10.588: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Sep 17 17:18:10.650: INFO: Number of nodes with available pods: 0 Sep 17 17:18:10.651: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:11.679: INFO: Number of nodes with available pods: 0 Sep 17 17:18:11.680: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:12.797: INFO: Number of nodes with available pods: 0 Sep 17 17:18:12.797: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:13.680: INFO: Number of nodes with available pods: 0 Sep 17 17:18:13.680: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:14.657: INFO: Number of nodes with available pods: 0 Sep 17 17:18:14.657: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:15.657: INFO: Number of nodes with available pods: 1 Sep 17 17:18:15.657: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 17 17:18:15.748: INFO: Number of nodes with available pods: 1 Sep 17 17:18:15.749: INFO: Number of running nodes: 0, number of available pods: 1 Sep 17 17:18:16.756: INFO: Number of nodes with available pods: 0 Sep 17 17:18:16.756: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 17 17:18:16.768: INFO: Number of nodes with available pods: 0 Sep 17 17:18:16.768: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:17.775: INFO: Number of nodes with available pods: 0 Sep 17 17:18:17.776: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:18.776: INFO: Number of nodes with available pods: 0 Sep 17 17:18:18.776: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:19.784: INFO: Number of nodes 
with available pods: 0 Sep 17 17:18:19.784: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:20.776: INFO: Number of nodes with available pods: 0 Sep 17 17:18:20.776: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:21.775: INFO: Number of nodes with available pods: 0 Sep 17 17:18:21.775: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:22.776: INFO: Number of nodes with available pods: 0 Sep 17 17:18:22.776: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:23.775: INFO: Number of nodes with available pods: 0 Sep 17 17:18:23.775: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:24.774: INFO: Number of nodes with available pods: 0 Sep 17 17:18:24.775: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:25.796: INFO: Number of nodes with available pods: 0 Sep 17 17:18:25.796: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:26.776: INFO: Number of nodes with available pods: 0 Sep 17 17:18:26.776: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:27.796: INFO: Number of nodes with available pods: 0 Sep 17 17:18:27.796: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:28.776: INFO: Number of nodes with available pods: 0 Sep 17 17:18:28.776: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:29.776: INFO: Number of nodes with available pods: 0 Sep 17 17:18:29.776: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:30.775: INFO: Number of nodes with available pods: 0 Sep 17 17:18:30.775: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:18:31.775: INFO: Number of nodes with available pods: 1 Sep 17 17:18:31.775: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9578, will wait for the garbage collector to delete the pods Sep 17 17:18:31.848: INFO: Deleting DaemonSet.extensions daemon-set took: 9.812537ms Sep 17 17:18:32.149: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.719515ms Sep 17 17:18:35.955: INFO: Number of nodes with available pods: 0 Sep 17 17:18:35.955: INFO: Number of running nodes: 0, number of available pods: 0 Sep 17 17:18:35.960: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9578/daemonsets","resourceVersion":"1077290"},"items":null} Sep 17 17:18:35.965: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9578/pods","resourceVersion":"1077290"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:18:35.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9578" for this suite. 
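The "complex daemon" flow steers scheduling entirely with labels: the DaemonSet's pod template carries a nodeSelector, so labeling a node blue launches the daemon pod there, and flipping the label to green unschedules it until the selector is updated to match. A sketch of the two moving parts; the label key, image, and ordering are assumptions, and v0.18+ client-go signatures are used:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	labels := map[string]string{"app": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labeled color=blue run the daemon pod.
					NodeSelector: map[string]string{"color": "blue"}, // hypothetical key
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4.38-alpine", // stand-in image
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().DaemonSets("daemonsets-9578").Create(ctx, ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Relabeling the node to green unschedules the daemon pod, as logged above.
	node, err := cs.CoreV1().Nodes().Get(ctx, "jerma-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Labels["color"] = "green"
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}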
• [SLOW TEST:25.588 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":134,"skipped":2119,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:18:36.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-81057c60-15b3-475b-9039-57968cecf585 in namespace container-probe-2502 Sep 17 17:18:40.155: INFO: Started pod liveness-81057c60-15b3-475b-9039-57968cecf585 in namespace container-probe-2502 STEP: checking the pod's current state and verifying that restartCount is present Sep 17 17:18:40.161: INFO: Initial restart count of pod liveness-81057c60-15b3-475b-9039-57968cecf585 is 0 Sep 17 17:19:04.275: INFO: Restart count of pod container-probe-2502/liveness-81057c60-15b3-475b-9039-57968cecf585 is now 1 (24.114404943s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:19:04.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2502" for this suite. 
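The restart is produced by an HTTP liveness probe against /healthz: once the endpoint starts failing, the kubelet kills and restarts the container, which is the restartCount 0 to 1 transition observed after roughly 24s. A sketch of the probe wiring; the image, port, and thresholds are assumptions, and the embedded field is named ProbeHandler in current k8s.io/api (plain Handler in the 1.17-era structs this suite used):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // stand-in liveness server
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						// The kubelet GETs this path; non-2xx/3xx counts as a failure.
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}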
• [SLOW TEST:28.244 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2127,"failed":0} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:19:04.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-4918 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4918 STEP: Deleting pre-stop pod Sep 17 17:19:17.698: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:19:17.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4918" for this suite. 
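The "prestop": 1 entry in the server's report shows the tester pod's preStop hook ran while the pod was being deleted; a preStop handler executes before the container receives SIGTERM. A sketch of the hook; the image, command, and endpoint are assumptions, and the type is LifecycleHandler in current k8s.io/api (plain Handler in 1.17-era structs):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox", // stand-in image
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs before shutdown; here it reports to the server pod,
					// which the log above records as "prestop": 1.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-qO-", "http://server/prestop"}, // hypothetical URL
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}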
• [SLOW TEST:13.429 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":136,"skipped":2127,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:19:17.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3937 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3937 I0917 17:19:18.234394 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3937, replica count: 2 I0917 17:19:21.285894 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0917 17:19:24.286696 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 17 17:19:24.290: INFO: Creating new exec pod Sep 17 17:19:29.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3937 execpodrkmd9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 17 17:19:33.619: INFO: stderr: "I0917 17:19:33.499688 2167 log.go:172] (0x24b0230) (0x24b02a0) Create stream\nI0917 17:19:33.501629 2167 log.go:172] (0x24b0230) (0x24b02a0) Stream added, broadcasting: 1\nI0917 17:19:33.511721 2167 log.go:172] (0x24b0230) Reply frame received for 1\nI0917 17:19:33.512316 2167 log.go:172] (0x24b0230) (0x28760e0) Create stream\nI0917 17:19:33.512393 2167 log.go:172] (0x24b0230) (0x28760e0) Stream added, broadcasting: 3\nI0917 17:19:33.513844 2167 log.go:172] (0x24b0230) Reply frame received for 3\nI0917 17:19:33.514189 2167 log.go:172] (0x24b0230) (0x2876460) Create stream\nI0917 17:19:33.514303 2167 log.go:172] (0x24b0230) (0x2876460) Stream added, broadcasting: 5\nI0917 17:19:33.515687 2167 log.go:172] (0x24b0230) Reply frame received for 5\nI0917 17:19:33.602830 2167 log.go:172] (0x24b0230) 
Data frame received for 5\nI0917 17:19:33.603095 2167 log.go:172] (0x2876460) (5) Data frame handling\nI0917 17:19:33.603201 2167 log.go:172] (0x24b0230) Data frame received for 3\nI0917 17:19:33.603315 2167 log.go:172] (0x28760e0) (3) Data frame handling\nI0917 17:19:33.603913 2167 log.go:172] (0x2876460) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0917 17:19:33.605103 2167 log.go:172] (0x24b0230) Data frame received for 5\nI0917 17:19:33.606075 2167 log.go:172] (0x2876460) (5) Data frame handling\nI0917 17:19:33.606189 2167 log.go:172] (0x2876460) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0917 17:19:33.606275 2167 log.go:172] (0x24b0230) Data frame received for 1\nI0917 17:19:33.606361 2167 log.go:172] (0x24b02a0) (1) Data frame handling\nI0917 17:19:33.606458 2167 log.go:172] (0x24b02a0) (1) Data frame sent\nI0917 17:19:33.606650 2167 log.go:172] (0x24b0230) Data frame received for 5\nI0917 17:19:33.606766 2167 log.go:172] (0x2876460) (5) Data frame handling\nI0917 17:19:33.607629 2167 log.go:172] (0x24b0230) (0x24b02a0) Stream removed, broadcasting: 1\nI0917 17:19:33.607845 2167 log.go:172] (0x24b0230) Go away received\nI0917 17:19:33.610779 2167 log.go:172] (0x24b0230) (0x24b02a0) Stream removed, broadcasting: 1\nI0917 17:19:33.611027 2167 log.go:172] (0x24b0230) (0x28760e0) Stream removed, broadcasting: 3\nI0917 17:19:33.611204 2167 log.go:172] (0x24b0230) (0x2876460) Stream removed, broadcasting: 5\n" Sep 17 17:19:33.620: INFO: stdout: "" Sep 17 17:19:33.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3937 execpodrkmd9 -- /bin/sh -x -c nc -zv -t -w 2 10.100.184.175 80' Sep 17 17:19:35.019: INFO: stderr: "I0917 17:19:34.891645 2196 log.go:172] (0x2b7d730) (0x2b7d7a0) Create stream\nI0917 17:19:34.894539 2196 log.go:172] (0x2b7d730) (0x2b7d7a0) Stream added, broadcasting: 1\nI0917 17:19:34.908000 2196 log.go:172] (0x2b7d730) Reply frame received for 1\nI0917 17:19:34.908698 2196 log.go:172] (0x2b7d730) (0x24aa930) Create stream\nI0917 17:19:34.908778 2196 log.go:172] (0x2b7d730) (0x24aa930) Stream added, broadcasting: 3\nI0917 17:19:34.910191 2196 log.go:172] (0x2b7d730) Reply frame received for 3\nI0917 17:19:34.910426 2196 log.go:172] (0x2b7d730) (0x26e3260) Create stream\nI0917 17:19:34.910496 2196 log.go:172] (0x2b7d730) (0x26e3260) Stream added, broadcasting: 5\nI0917 17:19:34.911673 2196 log.go:172] (0x2b7d730) Reply frame received for 5\nI0917 17:19:35.003111 2196 log.go:172] (0x2b7d730) Data frame received for 5\nI0917 17:19:35.003308 2196 log.go:172] (0x2b7d730) Data frame received for 1\nI0917 17:19:35.003601 2196 log.go:172] (0x2b7d730) Data frame received for 3\nI0917 17:19:35.003841 2196 log.go:172] (0x2b7d7a0) (1) Data frame handling\nI0917 17:19:35.004085 2196 log.go:172] (0x24aa930) (3) Data frame handling\nI0917 17:19:35.004555 2196 log.go:172] (0x26e3260) (5) Data frame handling\nI0917 17:19:35.006643 2196 log.go:172] (0x26e3260) (5) Data frame sent\n+ nc -zv -t -w 2 10.100.184.175 80\nConnection to 10.100.184.175 80 port [tcp/http] succeeded!\nI0917 17:19:35.007823 2196 log.go:172] (0x2b7d7a0) (1) Data frame sent\nI0917 17:19:35.008015 2196 log.go:172] (0x2b7d730) Data frame received for 5\nI0917 17:19:35.008131 2196 log.go:172] (0x26e3260) (5) Data frame handling\nI0917 17:19:35.008871 2196 log.go:172] (0x2b7d730) (0x2b7d7a0) Stream removed, broadcasting: 1\nI0917 17:19:35.009455 2196 log.go:172] (0x2b7d730) Go away received\nI0917 17:19:35.011805 
2196 log.go:172] (0x2b7d730) (0x2b7d7a0) Stream removed, broadcasting: 1\nI0917 17:19:35.012034 2196 log.go:172] (0x2b7d730) (0x24aa930) Stream removed, broadcasting: 3\nI0917 17:19:35.012309 2196 log.go:172] (0x2b7d730) (0x26e3260) Stream removed, broadcasting: 5\n" Sep 17 17:19:35.020: INFO: stdout: "" Sep 17 17:19:35.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3937 execpodrkmd9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.8 32075' Sep 17 17:19:36.348: INFO: stderr: "I0917 17:19:36.252070 2219 log.go:172] (0x29da000) (0x29fc000) Create stream\nI0917 17:19:36.261010 2219 log.go:172] (0x29da000) (0x29fc000) Stream added, broadcasting: 1\nI0917 17:19:36.268297 2219 log.go:172] (0x29da000) Reply frame received for 1\nI0917 17:19:36.268799 2219 log.go:172] (0x29da000) (0x29fc150) Create stream\nI0917 17:19:36.268861 2219 log.go:172] (0x29da000) (0x29fc150) Stream added, broadcasting: 3\nI0917 17:19:36.270067 2219 log.go:172] (0x29da000) Reply frame received for 3\nI0917 17:19:36.270407 2219 log.go:172] (0x29da000) (0x29fc310) Create stream\nI0917 17:19:36.270491 2219 log.go:172] (0x29da000) (0x29fc310) Stream added, broadcasting: 5\nI0917 17:19:36.271878 2219 log.go:172] (0x29da000) Reply frame received for 5\nI0917 17:19:36.329837 2219 log.go:172] (0x29da000) Data frame received for 5\nI0917 17:19:36.330171 2219 log.go:172] (0x29fc310) (5) Data frame handling\nI0917 17:19:36.331155 2219 log.go:172] (0x29da000) Data frame received for 3\nI0917 17:19:36.331303 2219 log.go:172] (0x29fc150) (3) Data frame handling\nI0917 17:19:36.331410 2219 log.go:172] (0x29da000) Data frame received for 1\nI0917 17:19:36.331552 2219 log.go:172] (0x29fc000) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.8 32075\nI0917 17:19:36.331881 2219 log.go:172] (0x29fc000) (1) Data frame sent\nI0917 17:19:36.332650 2219 log.go:172] (0x29fc310) (5) Data frame sent\nI0917 17:19:36.332801 2219 log.go:172] (0x29da000) Data frame received for 5\nI0917 17:19:36.332915 2219 log.go:172] (0x29fc310) (5) Data frame handling\nConnection to 172.18.0.8 32075 port [tcp/32075] succeeded!\nI0917 17:19:36.335219 2219 log.go:172] (0x29fc310) (5) Data frame sent\nI0917 17:19:36.335399 2219 log.go:172] (0x29da000) Data frame received for 5\nI0917 17:19:36.335966 2219 log.go:172] (0x29da000) (0x29fc000) Stream removed, broadcasting: 1\nI0917 17:19:36.337586 2219 log.go:172] (0x29fc310) (5) Data frame handling\nI0917 17:19:36.338016 2219 log.go:172] (0x29da000) Go away received\nI0917 17:19:36.340919 2219 log.go:172] (0x29da000) (0x29fc000) Stream removed, broadcasting: 1\nI0917 17:19:36.341162 2219 log.go:172] (0x29da000) (0x29fc150) Stream removed, broadcasting: 3\nI0917 17:19:36.341350 2219 log.go:172] (0x29da000) (0x29fc310) Stream removed, broadcasting: 5\n" Sep 17 17:19:36.350: INFO: stdout: "" Sep 17 17:19:36.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3937 execpodrkmd9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32075' Sep 17 17:19:37.692: INFO: stderr: "I0917 17:19:37.598870 2243 log.go:172] (0x25d1490) (0x25d15e0) Create stream\nI0917 17:19:37.601879 2243 log.go:172] (0x25d1490) (0x25d15e0) Stream added, broadcasting: 1\nI0917 17:19:37.617246 2243 log.go:172] (0x25d1490) Reply frame received for 1\nI0917 17:19:37.617787 2243 log.go:172] (0x25d1490) (0x2c840e0) Create stream\nI0917 17:19:37.617867 2243 log.go:172] (0x25d1490) (0x2c840e0) Stream added, broadcasting: 3\nI0917 17:19:37.619311 2243 log.go:172] (0x25d1490) 
Reply frame received for 3\nI0917 17:19:37.619545 2243 log.go:172] (0x25d1490) (0x29f6310) Create stream\nI0917 17:19:37.619614 2243 log.go:172] (0x25d1490) (0x29f6310) Stream added, broadcasting: 5\nI0917 17:19:37.620786 2243 log.go:172] (0x25d1490) Reply frame received for 5\nI0917 17:19:37.673015 2243 log.go:172] (0x25d1490) Data frame received for 3\nI0917 17:19:37.673569 2243 log.go:172] (0x25d1490) Data frame received for 5\nI0917 17:19:37.673806 2243 log.go:172] (0x29f6310) (5) Data frame handling\nI0917 17:19:37.674386 2243 log.go:172] (0x2c840e0) (3) Data frame handling\nI0917 17:19:37.674663 2243 log.go:172] (0x25d1490) Data frame received for 1\nI0917 17:19:37.674816 2243 log.go:172] (0x25d15e0) (1) Data frame handling\nI0917 17:19:37.675550 2243 log.go:172] (0x25d15e0) (1) Data frame sent\nI0917 17:19:37.676370 2243 log.go:172] (0x29f6310) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.10 32075\nConnection to 172.18.0.10 32075 port [tcp/32075] succeeded!\nI0917 17:19:37.678057 2243 log.go:172] (0x25d1490) Data frame received for 5\nI0917 17:19:37.678220 2243 log.go:172] (0x29f6310) (5) Data frame handling\nI0917 17:19:37.679610 2243 log.go:172] (0x25d1490) (0x25d15e0) Stream removed, broadcasting: 1\nI0917 17:19:37.679933 2243 log.go:172] (0x25d1490) Go away received\nI0917 17:19:37.683529 2243 log.go:172] (0x25d1490) (0x25d15e0) Stream removed, broadcasting: 1\nI0917 17:19:37.683811 2243 log.go:172] (0x25d1490) (0x2c840e0) Stream removed, broadcasting: 3\nI0917 17:19:37.684043 2243 log.go:172] (0x25d1490) (0x29f6310) Stream removed, broadcasting: 5\n" Sep 17 17:19:37.693: INFO: stdout: "" Sep 17 17:19:37.693: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:19:37.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3937" for this suite. 
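The type change is a plain spec update: clear spec.externalName, set spec.type to NodePort, and define ports; the apiserver allocates the nodePort (32075 here) and kube-proxy exposes it on every node, which the nc probes against 172.18.0.8 and 172.18.0.10 confirm. A sketch of the update, reusing names from this run and assuming v0.18+ client-go signatures:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	svc, err := cs.CoreV1().Services("services-3937").Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Flip the type; the apiserver allocates a nodePort on update.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{
		Port:       80,
		TargetPort: intstr.FromInt(80),
		Protocol:   corev1.ProtocolTCP,
	}}
	updated, err := cs.CoreV1().Services("services-3937").Update(ctx, svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated nodePort:", updated.Spec.Ports[0].NodePort)
}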
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.049 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":137,"skipped":2146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:19:37.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 17 17:19:37.898: INFO: Waiting up to 5m0s for pod "pod-102f1583-934a-46af-9281-f2128b7ffa4e" in namespace "emptydir-6724" to be "success or failure" Sep 17 17:19:37.903: INFO: Pod "pod-102f1583-934a-46af-9281-f2128b7ffa4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34167ms Sep 17 17:19:39.916: INFO: Pod "pod-102f1583-934a-46af-9281-f2128b7ffa4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017670063s Sep 17 17:19:41.922: INFO: Pod "pod-102f1583-934a-46af-9281-f2128b7ffa4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023983259s STEP: Saw pod success Sep 17 17:19:41.923: INFO: Pod "pod-102f1583-934a-46af-9281-f2128b7ffa4e" satisfied condition "success or failure" Sep 17 17:19:41.927: INFO: Trying to get logs from node jerma-worker pod pod-102f1583-934a-46af-9281-f2128b7ffa4e container test-container: STEP: delete the pod Sep 17 17:19:41.968: INFO: Waiting for pod pod-102f1583-934a-46af-9281-f2128b7ffa4e to disappear Sep 17 17:19:41.973: INFO: Pod pod-102f1583-934a-46af-9281-f2128b7ffa4e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:19:41.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6724" for this suite. 
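The (root,0777,default) case mounts an emptyDir on the default medium (node disk), writes a file as root, and asserts the 0777 mode bits survive on the mount. An equivalent pod sketch that prints the mode, with busybox standing in for the suite's mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in image
				// Create a file with mode 0777 and show the resulting bits,
				// which is what the suite's mounttest binary asserts on.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}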
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2182,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:19:41.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8840.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8840.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8840.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8840.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8840.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8840.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 17 17:19:50.176: INFO: DNS probes using dns-8840/dns-test-45afe3a7-f527-4436-8e35-7849b0aa8c82 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:19:50.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8840" for this suite. 
• [SLOW TEST:8.363 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":139,"skipped":2195,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:19:50.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0917 17:20:02.678622 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 17 17:20:02.678: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:20:02.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9244" for this suite. 
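The pods that survive are exactly the ones given a second owner reference pointing at simpletest-rc-to-stay: the garbage collector deletes a dependent only once all of its owners are gone or being deleted. A sketch of attaching the extra owner; the pod name is hypothetical, and the UID must come from the live RC:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	stay, err := cs.CoreV1().ReplicationControllers("gc-9244").Get(ctx, "simpletest-rc-to-stay", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("gc-9244").Get(ctx, "simpletest-pod-x", metav1.GetOptions{}) // hypothetical pod name
	if err != nil {
		panic(err)
	}
	// A dependent with two owners is kept while either owner still exists,
	// which is why half the pods survive deleting simpletest-rc-to-be-deleted.
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       stay.Name,
		UID:        stay.UID,
	})
	if _, err := cs.CoreV1().Pods("gc-9244").Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}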
• [SLOW TEST:12.339 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":140,"skipped":2213,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:20:02.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:20:03.115: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:20:03.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9249" for this suite. 
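The status sub-resource being exercised here belongs to the CustomResourceDefinition object itself: reads come back with the main object, but writes to .status must target the dedicated /status endpoint. A sketch, with <crd-name> as a placeholder; the storedVersions patch shown is the one status field an administrator legitimately edits, for example when pruning old served versions:

# Status on a CRD is visible on ordinary reads...
kubectl get crd <crd-name> -o jsonpath='{.status.acceptedNames.kind}'
# ...but mutations have to go through the sub-resource endpoint.
kubectl proxy --port=8001 &
curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
  -d '{"status":{"storedVersions":["v1"]}}' \
  http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/<crd-name>/status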
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":141,"skipped":2229,"failed":0} ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:20:03.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-76fd6d71-7bf1-4497-832c-2be53d587fa6 STEP: Creating configMap with name cm-test-opt-upd-c428f98d-a965-474e-9746-74967c672eb1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-76fd6d71-7bf1-4497-832c-2be53d587fa6 STEP: Updating configmap cm-test-opt-upd-c428f98d-a965-474e-9746-74967c672eb1 STEP: Creating configMap with name cm-test-opt-create-2c37a091-3cd7-4294-8509-9e9e23e2d272 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:21:15.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7846" for this suite. 
• [SLOW TEST:71.424 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2229,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:21:15.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Sep 17 17:21:15.169: INFO: Waiting up to 5m0s for pod "downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110" in namespace "downward-api-8525" to be "success or failure" Sep 17 17:21:15.199: INFO: Pod "downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110": Phase="Pending", Reason="", readiness=false. Elapsed: 29.931716ms Sep 17 17:21:17.206: INFO: Pod "downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037072031s Sep 17 17:21:19.214: INFO: Pod "downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044231344s STEP: Saw pod success Sep 17 17:21:19.214: INFO: Pod "downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110" satisfied condition "success or failure" Sep 17 17:21:19.219: INFO: Trying to get logs from node jerma-worker pod downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110 container dapi-container: STEP: delete the pod Sep 17 17:21:19.276: INFO: Waiting for pod downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110 to disappear Sep 17 17:21:19.291: INFO: Pod downward-api-c16e99a1-b7c6-4657-8ec6-4d5f1d8d0110 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:21:19.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8525" for this suite. 
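The env var under test comes from the downward API's fieldRef to status.hostIP. A minimal equivalent of the pod the suite generates (name and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF

kubectl logs downward-hostip-demo then shows the IP of whichever node the pod landed on, the kind of value the test asserts on when it pulls the container log.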
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:21:19.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276 STEP: creating the pod Sep 17 17:21:19.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2918' Sep 17 17:21:21.107: INFO: stderr: "" Sep 17 17:21:21.107: INFO: stdout: "pod/pause created\n" Sep 17 17:21:21.107: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 17 17:21:21.108: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2918" to be "running and ready" Sep 17 17:21:21.141: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 33.63187ms Sep 17 17:21:23.147: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03931686s Sep 17 17:21:25.171: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.063506966s Sep 17 17:21:25.172: INFO: Pod "pause" satisfied condition "running and ready" Sep 17 17:21:25.172: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Sep 17 17:21:25.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2918' Sep 17 17:21:26.285: INFO: stderr: "" Sep 17 17:21:26.285: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 17 17:21:26.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2918' Sep 17 17:21:27.393: INFO: stderr: "" Sep 17 17:21:27.394: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 17 17:21:27.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2918' Sep 17 17:21:28.518: INFO: stderr: "" Sep 17 17:21:28.518: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 17 17:21:28.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2918' Sep 17 17:21:29.628: INFO: stderr: "" Sep 17 17:21:29.628: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n" [AfterEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283 STEP: using delete to clean up resources Sep 17 17:21:29.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2918' Sep 17 17:21:30.771: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 17:21:30.772: INFO: stdout: "pod \"pause\" force deleted\n" Sep 17 17:21:30.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2918' Sep 17 17:21:31.871: INFO: stderr: "No resources found in kubectl-2918 namespace.\n" Sep 17 17:21:31.871: INFO: stdout: "" Sep 17 17:21:31.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2918 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 17 17:21:32.959: INFO: stderr: "" Sep 17 17:21:32.960: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:21:32.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2918" for this suite. 
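Distilled, the label lifecycle this spec drives through kubectl (pod name as in the log; -L adds the label's value as an output column, and a trailing hyphen removes the key):

kubectl label pod pause testing-label=testing-label-value   # add
kubectl get pod pause -L testing-label                      # inspect
kubectl label pod pause testing-label-                      # remove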
• [SLOW TEST:13.664 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273 should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":144,"skipped":2283,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:21:32.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:21:33.038: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 17 17:21:50.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2936 create -f -' Sep 17 17:21:55.085: INFO: stderr: "" Sep 17 17:21:55.085: INFO: stdout: "e2e-test-crd-publish-openapi-8064-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 17 17:21:55.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2936 delete e2e-test-crd-publish-openapi-8064-crds test-cr' Sep 17 17:21:56.183: INFO: stderr: "" Sep 17 17:21:56.183: INFO: stdout: "e2e-test-crd-publish-openapi-8064-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 17 17:21:56.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2936 apply -f -' Sep 17 17:21:57.673: INFO: stderr: "" Sep 17 17:21:57.673: INFO: stdout: "e2e-test-crd-publish-openapi-8064-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 17 17:21:57.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2936 delete e2e-test-crd-publish-openapi-8064-crds test-cr' Sep 17 17:21:58.779: INFO: stderr: "" Sep 17 17:21:58.779: INFO: stdout: "e2e-test-crd-publish-openapi-8064-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 17 17:21:58.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8064-crds' Sep 17 
17:22:00.246: INFO: stderr: "" Sep 17 17:22:00.246: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8064-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:22:09.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2936" for this suite. • [SLOW TEST:36.785 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":145,"skipped":2284,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:22:09.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Sep 17 17:22:09.912: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:09.942: INFO: Number of nodes with available pods: 0 Sep 17 17:22:09.942: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:22:10.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:10.958: INFO: Number of nodes with available pods: 0 Sep 17 17:22:10.958: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:22:11.955: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:11.959: INFO: Number of nodes with available pods: 0 Sep 17 17:22:11.959: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:22:12.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:12.959: INFO: Number of nodes with available pods: 0 Sep 17 17:22:12.959: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:22:13.955: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:14.002: INFO: Number of nodes with available pods: 1 Sep 17 17:22:14.002: INFO: Node jerma-worker2 is running more than one daemon pod Sep 17 17:22:14.949: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:14.955: INFO: Number of nodes with available pods: 2 Sep 17 17:22:14.955: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Sep 17 17:22:15.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:15.027: INFO: Number of nodes with available pods: 1 Sep 17 17:22:15.027: INFO: Node jerma-worker2 is running more than one daemon pod Sep 17 17:22:16.234: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:16.283: INFO: Number of nodes with available pods: 1 Sep 17 17:22:16.283: INFO: Node jerma-worker2 is running more than one daemon pod Sep 17 17:22:17.035: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:17.041: INFO: Number of nodes with available pods: 1 Sep 17 17:22:17.041: INFO: Node jerma-worker2 is running more than one daemon pod Sep 17 17:22:18.039: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:18.045: INFO: Number of nodes with available pods: 1 Sep 17 17:22:18.045: INFO: Node jerma-worker2 is running more than one daemon pod Sep 17 17:22:19.038: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:22:19.044: INFO: Number of nodes with available pods: 2 Sep 17 17:22:19.044: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8585, will wait for the garbage collector to delete the pods Sep 17 17:22:19.117: INFO: Deleting DaemonSet.extensions daemon-set took: 8.628001ms Sep 17 17:22:19.417: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.789404ms Sep 17 17:22:27.823: INFO: Number of nodes with available pods: 0 Sep 17 17:22:27.823: INFO: Number of running nodes: 0, number of available pods: 0 Sep 17 17:22:27.828: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8585/daemonsets","resourceVersion":"1078580"},"items":null} Sep 17 17:22:27.831: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8585/pods","resourceVersion":"1078580"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:22:27.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8585" for this suite. 
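The repeated "can't tolerate node jerma-control-plane" lines are the DaemonSet controller's taint handling: without a matching toleration, the master's NoSchedule taint keeps daemon pods off that node, so only the two workers count toward availability. A minimal DaemonSet of the same shape (name and pause image are illustrative; the suite's own image differs):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF

Forcing one of its pods to Failed, as the spec does through the API, makes the controller delete and recreate it, which is the retry behavior being certified.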
• [SLOW TEST:18.098 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":146,"skipped":2294,"failed":0} [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:22:27.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-632e1e97-6a68-41a6-87d7-9213e6c0e814 STEP: Creating a pod to test consume secrets Sep 17 17:22:27.973: INFO: Waiting up to 5m0s for pod "pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64" in namespace "secrets-569" to be "success or failure" Sep 17 17:22:28.009: INFO: Pod "pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64": Phase="Pending", Reason="", readiness=false. Elapsed: 35.959151ms Sep 17 17:22:30.016: INFO: Pod "pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042339113s Sep 17 17:22:32.021: INFO: Pod "pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04720293s STEP: Saw pod success Sep 17 17:22:32.021: INFO: Pod "pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64" satisfied condition "success or failure" Sep 17 17:22:32.025: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64 container secret-env-test: STEP: delete the pod Sep 17 17:22:32.044: INFO: Waiting for pod pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64 to disappear Sep 17 17:22:32.048: INFO: Pod pod-secrets-2872d20b-251e-4be9-9b44-581be329ba64 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:22:32.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-569" for this suite. 
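The consumption path here is env, not a volume: the pod references one key of the secret through secretKeyRef. A minimal sketch (names and image illustrative):

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.31
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF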
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2294,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:22:32.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Sep 17 17:22:32.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-68' Sep 17 17:22:33.655: INFO: stderr: "" Sep 17 17:22:33.655: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 17 17:22:33.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-68' Sep 17 17:22:34.764: INFO: stderr: "" Sep 17 17:22:34.764: INFO: stdout: "update-demo-nautilus-6cnhk update-demo-nautilus-8gpsb " Sep 17 17:22:34.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cnhk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:22:35.908: INFO: stderr: "" Sep 17 17:22:35.908: INFO: stdout: "" Sep 17 17:22:35.908: INFO: update-demo-nautilus-6cnhk is created but not running Sep 17 17:22:40.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-68' Sep 17 17:22:42.033: INFO: stderr: "" Sep 17 17:22:42.033: INFO: stdout: "update-demo-nautilus-6cnhk update-demo-nautilus-8gpsb " Sep 17 17:22:42.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cnhk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:22:43.145: INFO: stderr: "" Sep 17 17:22:43.145: INFO: stdout: "true" Sep 17 17:22:43.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cnhk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:22:44.262: INFO: stderr: "" Sep 17 17:22:44.262: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 17:22:44.262: INFO: validating pod update-demo-nautilus-6cnhk Sep 17 17:22:44.269: INFO: got data: { "image": "nautilus.jpg" } Sep 17 17:22:44.269: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 17:22:44.269: INFO: update-demo-nautilus-6cnhk is verified up and running Sep 17 17:22:44.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gpsb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:22:45.379: INFO: stderr: "" Sep 17 17:22:45.379: INFO: stdout: "true" Sep 17 17:22:45.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gpsb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:22:46.496: INFO: stderr: "" Sep 17 17:22:46.496: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 17:22:46.496: INFO: validating pod update-demo-nautilus-8gpsb Sep 17 17:22:46.502: INFO: got data: { "image": "nautilus.jpg" } Sep 17 17:22:46.502: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 17:22:46.502: INFO: update-demo-nautilus-8gpsb is verified up and running STEP: rolling-update to new replication controller Sep 17 17:22:46.517: INFO: scanned /root for discovery docs: Sep 17 17:22:46.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-68' Sep 17 17:23:11.221: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Sep 17 17:23:11.221: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 17 17:23:11.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-68' Sep 17 17:23:12.331: INFO: stderr: "" Sep 17 17:23:12.332: INFO: stdout: "update-demo-kitten-m8xbf update-demo-kitten-snxth " Sep 17 17:23:12.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m8xbf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:23:13.448: INFO: stderr: "" Sep 17 17:23:13.448: INFO: stdout: "true" Sep 17 17:23:13.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m8xbf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:23:14.560: INFO: stderr: "" Sep 17 17:23:14.560: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Sep 17 17:23:14.560: INFO: validating pod update-demo-kitten-m8xbf Sep 17 17:23:14.567: INFO: got data: { "image": "kitten.jpg" } Sep 17 17:23:14.568: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Sep 17 17:23:14.568: INFO: update-demo-kitten-m8xbf is verified up and running Sep 17 17:23:14.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-snxth -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:23:15.699: INFO: stderr: "" Sep 17 17:23:15.699: INFO: stdout: "true" Sep 17 17:23:15.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-snxth -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-68' Sep 17 17:23:16.803: INFO: stderr: "" Sep 17 17:23:16.803: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Sep 17 17:23:16.803: INFO: validating pod update-demo-kitten-snxth Sep 17 17:23:16.809: INFO: got data: { "image": "kitten.jpg" } Sep 17 17:23:16.809: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Sep 17 17:23:16.809: INFO: update-demo-kitten-snxth is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:23:16.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-68" for this suite. 
• [SLOW TEST:44.761 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":148,"skipped":2296,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:23:16.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Sep 17 17:23:16.915: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 17 17:23:16.935: INFO: Waiting for terminating namespaces to be deleted... 
Sep 17 17:23:16.938: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Sep 17 17:23:16.950: INFO: kube-proxy-4jmbs from kube-system started at 2020-09-13 16:54:28 +0000 UTC (1 container statuses recorded) Sep 17 17:23:16.950: INFO: Container kube-proxy ready: true, restart count 0 Sep 17 17:23:16.950: INFO: update-demo-kitten-m8xbf from kubectl-68 started at 2020-09-17 17:22:48 +0000 UTC (1 container statuses recorded) Sep 17 17:23:16.950: INFO: Container update-demo ready: true, restart count 0 Sep 17 17:23:16.950: INFO: kindnet-m6c7w from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded) Sep 17 17:23:16.950: INFO: Container kindnet-cni ready: true, restart count 0 Sep 17 17:23:16.950: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Sep 17 17:23:16.974: INFO: kube-proxy-2w9xp from kube-system started at 2020-09-13 16:54:31 +0000 UTC (1 container statuses recorded) Sep 17 17:23:16.975: INFO: Container kube-proxy ready: true, restart count 0 Sep 17 17:23:16.975: INFO: kindnet-4ckzg from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded) Sep 17 17:23:16.975: INFO: Container kindnet-cni ready: true, restart count 0 Sep 17 17:23:16.975: INFO: update-demo-kitten-snxth from kubectl-68 started at 2020-09-17 17:22:55 +0000 UTC (1 container statuses recorded) Sep 17 17:23:16.975: INFO: Container update-demo ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b7ea5a9c-7a31-4d6f-8e27-a92b78c51e65 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b7ea5a9c-7a31-4d6f-8e27-a92b78c51e65 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b7ea5a9c-7a31-4d6f-8e27-a92b78c51e65 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:28:25.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4957" for this suite. 
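The predicate being validated: hostPort claims collide per node, per port, per protocol, and a 0.0.0.0 claim covers every host IP, including 127.0.0.1. A sketch of the first pod's claim (pause image and the hostname pin are illustrative; the suite pins with a random node label instead):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker   # keep both pods on one node
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 0.0.0.0
EOF

A second pod identical except for hostIP: 127.0.0.1 then sits Pending on that node, which is why the spec spends most of its five minutes confirming pod5 never schedules.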
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.396 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":149,"skipped":2310,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:28:25.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 17 17:28:39.844: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 17 17:28:41.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735960519, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735960519, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735960519, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735960519, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 17 17:28:44.904: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:28:44.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4580-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:28:46.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6774" for this suite. STEP: Destroying namespace "webhook-6774-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.093 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":150,"skipped":2322,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:28:46.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Sep 17 17:28:50.434: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2684 PodName:pod-sharedvolume-16802609-bd5b-4068-a732-f9301abf3f50 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 17 17:28:50.434: INFO: >>> kubeConfig: /root/.kube/config I0917 17:28:50.554524 7 log.go:172] (0xa02c380) (0xa02c460) Create stream I0917 17:28:50.554663 7 log.go:172] (0xa02c380) (0xa02c460) Stream added, broadcasting: 1 I0917 17:28:50.558675 7 log.go:172] (0xa02c380) Reply frame received for 1 
I0917 17:28:50.558906 7 log.go:172] (0xa02c380) (0x84381c0) Create stream I0917 17:28:50.559064 7 log.go:172] (0xa02c380) (0x84381c0) Stream added, broadcasting: 3 I0917 17:28:50.561010 7 log.go:172] (0xa02c380) Reply frame received for 3 I0917 17:28:50.561231 7 log.go:172] (0xa02c380) (0xa02c620) Create stream I0917 17:28:50.561351 7 log.go:172] (0xa02c380) (0xa02c620) Stream added, broadcasting: 5 I0917 17:28:50.563052 7 log.go:172] (0xa02c380) Reply frame received for 5 I0917 17:28:50.640498 7 log.go:172] (0xa02c380) Data frame received for 3 I0917 17:28:50.640697 7 log.go:172] (0x84381c0) (3) Data frame handling I0917 17:28:50.640810 7 log.go:172] (0xa02c380) Data frame received for 5 I0917 17:28:50.641025 7 log.go:172] (0xa02c620) (5) Data frame handling I0917 17:28:50.641149 7 log.go:172] (0x84381c0) (3) Data frame sent I0917 17:28:50.641288 7 log.go:172] (0xa02c380) Data frame received for 3 I0917 17:28:50.641373 7 log.go:172] (0x84381c0) (3) Data frame handling I0917 17:28:50.642109 7 log.go:172] (0xa02c380) Data frame received for 1 I0917 17:28:50.642216 7 log.go:172] (0xa02c460) (1) Data frame handling I0917 17:28:50.642329 7 log.go:172] (0xa02c460) (1) Data frame sent I0917 17:28:50.642449 7 log.go:172] (0xa02c380) (0xa02c460) Stream removed, broadcasting: 1 I0917 17:28:50.642611 7 log.go:172] (0xa02c380) Go away received I0917 17:28:50.642994 7 log.go:172] (0xa02c380) (0xa02c460) Stream removed, broadcasting: 1 I0917 17:28:50.643117 7 log.go:172] (0xa02c380) (0x84381c0) Stream removed, broadcasting: 3 I0917 17:28:50.643211 7 log.go:172] (0xa02c380) (0xa02c620) Stream removed, broadcasting: 5 Sep 17 17:28:50.643: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:28:50.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2684" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":151,"skipped":2323,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:28:50.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-729/configmap-test-6b958c99-f9fe-4273-91a6-d096c2f17e39 STEP: Creating a pod to test consume configMaps Sep 17 17:28:50.758: INFO: Waiting up to 5m0s for pod "pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de" in namespace "configmap-729" to be "success or failure" Sep 17 17:28:50.771: INFO: Pod "pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.361548ms Sep 17 17:28:52.869: INFO: Pod "pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110971649s Sep 17 17:28:54.896: INFO: Pod "pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137364203s STEP: Saw pod success Sep 17 17:28:54.896: INFO: Pod "pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de" satisfied condition "success or failure" Sep 17 17:28:54.919: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de container env-test: STEP: delete the pod Sep 17 17:28:54.994: INFO: Waiting for pod pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de to disappear Sep 17 17:28:54.999: INFO: Pod pod-configmaps-40803484-c4fb-4d06-a333-fc3fbcdf20de no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:28:54.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-729" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2326,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:28:55.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create services for rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Sep 17 17:28:55.074: INFO: namespace kubectl-138 Sep 17 17:28:55.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-138' Sep 17 17:28:56.654: INFO: stderr: "" Sep 17 17:28:56.654: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Sep 17 17:28:57.663: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:28:57.663: INFO: Found 0 / 1 Sep 17 17:28:58.663: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:28:58.663: INFO: Found 0 / 1 Sep 17 17:28:59.662: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:28:59.663: INFO: Found 1 / 1 Sep 17 17:28:59.663: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 17 17:28:59.669: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:28:59.669: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Sep 17 17:28:59.669: INFO: wait on agnhost-master startup in kubectl-138 Sep 17 17:28:59.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-29ngl agnhost-master --namespace=kubectl-138' Sep 17 17:29:00.815: INFO: stderr: "" Sep 17 17:29:00.815: INFO: stdout: "Paused\n" STEP: exposing RC Sep 17 17:29:00.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-138' Sep 17 17:29:01.991: INFO: stderr: "" Sep 17 17:29:01.991: INFO: stdout: "service/rm2 exposed\n" Sep 17 17:29:01.998: INFO: Service rm2 in namespace kubectl-138 found. STEP: exposing service Sep 17 17:29:04.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-138' Sep 17 17:29:05.232: INFO: stderr: "" Sep 17 17:29:05.232: INFO: stdout: "service/rm3 exposed\n" Sep 17 17:29:05.275: INFO: Service rm3 in namespace kubectl-138 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:29:07.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-138" for this suite. • [SLOW TEST:12.296 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189 should create services for rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":153,"skipped":2336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:29:07.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:30:07.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8074" for this suite. • [SLOW TEST:60.137 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:30:07.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 17:30:07.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066" in namespace "projected-5688" to be "success or failure" Sep 17 17:30:07.567: INFO: Pod "downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066": Phase="Pending", Reason="", readiness=false. Elapsed: 17.390763ms Sep 17 17:30:09.574: INFO: Pod "downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024450855s Sep 17 17:30:11.596: INFO: Pod "downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046629581s STEP: Saw pod success Sep 17 17:30:11.596: INFO: Pod "downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066" satisfied condition "success or failure" Sep 17 17:30:11.600: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066 container client-container: STEP: delete the pod Sep 17 17:30:11.746: INFO: Waiting for pod downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066 to disappear Sep 17 17:30:11.763: INFO: Pod downwardapi-volume-373c972d-b2e6-435f-85cb-9b46e31b9066 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:30:11.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5688" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2454,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:30:11.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Sep 17 17:30:16.407: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-918 pod-service-account-396cf1b4-bc3a-431e-9ab0-5b569bb7f41d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 17 17:30:17.769: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-918 pod-service-account-396cf1b4-bc3a-431e-9ab0-5b569bb7f41d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 17 17:30:19.116: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-918 pod-service-account-396cf1b4-bc3a-431e-9ab0-5b569bb7f41d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:30:20.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-918" for this suite. 
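(The three kubectl exec calls in the ServiceAccounts test above read the credential files that Kubernetes projects into every container through the pod's service account. A rough way to poke at this by hand, assuming a reachable cluster and kubeconfig; "my-pod" and "my-container" are placeholder names, not the test's fixture:

  # List the files the service-account volume mounts into the container
  kubectl exec my-pod -c my-container -- ls /var/run/secrets/kubernetes.io/serviceaccount
  # Typically prints: ca.crt  namespace  token
  # The token is a JWT the container can present to the API server
  kubectl exec my-pod -c my-container -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
)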
• [SLOW TEST:8.725 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":156,"skipped":2455,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:30:20.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9030 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Sep 17 17:30:20.621: INFO: Found 0 stateful pods, waiting for 3 Sep 17 17:30:30.630: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:30:30.630: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:30:30.631: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 17 17:30:30.669: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Sep 17 17:30:40.740: INFO: Updating stateful set ss2 Sep 17 17:30:40.839: INFO: Waiting for Pod statefulset-9030/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Sep 17 17:30:50.981: INFO: Found 2 stateful pods, waiting for 3 Sep 17 17:31:00.989: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:31:00.990: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 17 17:31:00.990: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Sep 17 
17:31:01.023: INFO: Updating stateful set ss2 Sep 17 17:31:01.069: INFO: Waiting for Pod statefulset-9030/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 17 17:31:11.105: INFO: Updating stateful set ss2 Sep 17 17:31:11.129: INFO: Waiting for StatefulSet statefulset-9030/ss2 to complete update Sep 17 17:31:11.130: INFO: Waiting for Pod statefulset-9030/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Sep 17 17:31:21.144: INFO: Deleting all statefulset in ns statefulset-9030 Sep 17 17:31:21.148: INFO: Scaling statefulset ss2 to 0 Sep 17 17:31:41.171: INFO: Waiting for statefulset status.replicas updated to 0 Sep 17 17:31:41.176: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:31:41.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9030" for this suite. • [SLOW TEST:80.700 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":157,"skipped":2477,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:31:41.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] 
[k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:31:41.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7668" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2482,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:31:41.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 17:31:41.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5" in namespace "projected-8205" to be "success or failure" Sep 17 17:31:41.502: INFO: Pod "downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.52793ms Sep 17 17:31:43.509: INFO: Pod "downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053110399s Sep 17 17:31:45.516: INFO: Pod "downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060495485s STEP: Saw pod success Sep 17 17:31:45.516: INFO: Pod "downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5" satisfied condition "success or failure" Sep 17 17:31:45.522: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5 container client-container: STEP: delete the pod Sep 17 17:31:45.573: INFO: Waiting for pod downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5 to disappear Sep 17 17:31:45.578: INFO: Pod downwardapi-volume-e3164420-d83e-4e79-92fb-49e4e88f3cd5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:31:45.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8205" for this suite. 
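(The projected downwardAPI tests in this stretch of the run all follow the same shape: create a pod whose volume exposes one of the container's own resource fields as a file, then read that file back. A minimal sketch of such a pod for the memory-request case — names and image are illustrative, not the e2e fixture:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
      resources:
        requests:
          memory: "32Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
  EOF
  # Once the pod has Succeeded, the log shows the request in bytes:
  kubectl logs downwardapi-demo
)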
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2485,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:31:45.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:31:45.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8890' Sep 17 17:31:47.151: INFO: stderr: "" Sep 17 17:31:47.151: INFO: stdout: "replicationcontroller/agnhost-master created\n" Sep 17 17:31:47.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8890' Sep 17 17:31:48.899: INFO: stderr: "" Sep 17 17:31:48.900: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Sep 17 17:31:49.929: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:31:49.929: INFO: Found 0 / 1 Sep 17 17:31:50.906: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:31:50.906: INFO: Found 0 / 1 Sep 17 17:31:51.907: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:31:51.907: INFO: Found 1 / 1 Sep 17 17:31:51.907: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 17 17:31:51.912: INFO: Selector matched 1 pods for map[app:agnhost] Sep 17 17:31:51.912: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Sep 17 17:31:51.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-bl9f7 --namespace=kubectl-8890' Sep 17 17:31:53.358: INFO: stderr: "" Sep 17 17:31:53.358: INFO: stdout: "Name: agnhost-master-bl9f7\nNamespace: kubectl-8890\nPriority: 0\nNode: jerma-worker2/172.18.0.10\nStart Time: Thu, 17 Sep 2020 17:31:47 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.20\nIPs:\n IP: 10.244.2.20\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://7e9c5fd7d578a4765b9ab55b4b04587a26a18c37c5696c4d47dc0834df3688f0\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 17 Sep 2020 17:31:50 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-42vp9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-42vp9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-42vp9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-8890/agnhost-master-bl9f7 to jerma-worker2\n Normal Pulled 5s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 3s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 3s kubelet, jerma-worker2 Started container agnhost-master\n" Sep 17 17:31:53.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8890' Sep 17 17:31:57.331: INFO: stderr: "" Sep 17 17:31:57.332: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8890\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 10s replication-controller Created pod: agnhost-master-bl9f7\n" Sep 17 17:31:57.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8890' Sep 17 17:31:58.517: INFO: stderr: "" Sep 17 17:31:58.518: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8890\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.214.192\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.20:6379\nSession Affinity: None\nEvents: \n" Sep 17 17:31:58.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Sep 17 17:31:59.751: INFO: stderr: "" Sep 17 17:31:59.751: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 13 Sep 2020 16:53:15 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Thu, 17 Sep 2020 17:31:54 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 17 Sep 2020 17:29:39 +0000 Sun, 13 Sep 2020 16:53:15 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 17 Sep 2020 17:29:39 +0000 Sun, 13 Sep 2020 16:53:15 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 17 Sep 2020 17:29:39 +0000 Sun, 13 Sep 2020 16:53:15 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 17 Sep 2020 17:29:39 +0000 Sun, 13 Sep 2020 16:55:17 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 34d317de6c0f4901af8a95d62cd189bb\n System UUID: 866c93d8-86d4-4a56-a71b-7a976dec3f89\n Boot ID: 6cae8cc9-70fd-486a-9495-a1a7da130c42\n Kernel Version: 4.15.0-115-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.17.5\n Kube-Proxy Version: v1.17.5\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-fhdds 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d\n kube-system coredns-6955765f44-gcwhr 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d\n kube-system kindnet-vqdk2 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d\n kube-system kube-proxy-5fj45 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d\n local-path-storage local-path-provisioner-58f6947c7-pw6xw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Sep 17 17:31:59.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8890' Sep 17 17:32:00.844: INFO: stderr: "" Sep 17 17:32:00.845: INFO: stdout: "Name: kubectl-8890\nLabels: e2e-framework=kubectl\n e2e-run=838b0961-96dd-41e5-b72d-ccfdfd7426b6\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange 
resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:00.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8890" for this suite. • [SLOW TEST:15.239 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":160,"skipped":2487,"failed":0} [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:00.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a9262e73-367e-4955-b297-ae0f5673fe57 STEP: Creating a pod to test consume secrets Sep 17 17:32:00.965: INFO: Waiting up to 5m0s for pod "pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861" in namespace "secrets-3118" to be "success or failure" Sep 17 17:32:00.991: INFO: Pod "pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861": Phase="Pending", Reason="", readiness=false. Elapsed: 25.937748ms Sep 17 17:32:02.998: INFO: Pod "pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033173216s Sep 17 17:32:05.013: INFO: Pod "pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047755487s STEP: Saw pod success Sep 17 17:32:05.013: INFO: Pod "pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861" satisfied condition "success or failure" Sep 17 17:32:05.017: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861 container secret-volume-test: STEP: delete the pod Sep 17 17:32:05.035: INFO: Waiting for pod pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861 to disappear Sep 17 17:32:05.040: INFO: Pod pod-secrets-0d3c3ee6-b5be-4b92-a590-2f2497051861 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:05.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3118" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:05.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:18.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2646" for this suite. • [SLOW TEST:13.218 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":162,"skipped":2516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:18.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:32:18.349: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Sep 17 17:32:19.420: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:19.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8401" for this suite. 
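(The ReplicationController test above is easy to reproduce by hand: cap the namespace at two pods with a quota, ask an rc for three replicas, and watch a ReplicaFailure condition appear and then clear after scaling down. A sketch with placeholder names and a pause image standing in for the test's container:

  kubectl create quota condition-test --hard=pods=2
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: condition-test
  spec:
    replicas: 3
    selector:
      app: condition-test
    template:
      metadata:
        labels:
          app: condition-test
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1   # placeholder image
  EOF
  # The third pod is rejected by the quota, surfacing as a ReplicaFailure condition:
  kubectl get rc condition-test -o jsonpath='{.status.conditions}'
  # Scaling down to fit the quota clears the condition:
  kubectl scale rc condition-test --replicas=2
)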
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":163,"skipped":2554,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:19.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:26.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8170" for this suite. STEP: Destroying namespace "nsdeletetest-7590" for this suite. Sep 17 17:32:26.735: INFO: Namespace nsdeletetest-7590 was already deleted STEP: Destroying namespace "nsdeletetest-2751" for this suite. 
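(The Namespaces test above can be mirrored with plain kubectl: services are namespace-scoped, so deleting the namespace deletes the service, and a recreated namespace of the same name starts empty. Sketch, using a throwaway namespace name:

  kubectl create namespace nsdelete-demo
  kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
  kubectl delete namespace nsdelete-demo --wait=true
  # Recreate the namespace; the service does not come back with it
  kubectl create namespace nsdelete-demo
  kubectl get services -n nsdelete-demo   # expect: No resources found
)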
• [SLOW TEST:7.209 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":164,"skipped":2560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:26.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-ae88c503-9d4f-4fe1-861e-e1b58fc688d1 STEP: Creating a pod to test consume configMaps Sep 17 17:32:26.853: INFO: Waiting up to 5m0s for pod "pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a" in namespace "configmap-247" to be "success or failure" Sep 17 17:32:26.876: INFO: Pod "pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.56968ms Sep 17 17:32:28.884: INFO: Pod "pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030724755s Sep 17 17:32:30.891: INFO: Pod "pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03777329s STEP: Saw pod success Sep 17 17:32:30.891: INFO: Pod "pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a" satisfied condition "success or failure" Sep 17 17:32:30.896: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a container configmap-volume-test: STEP: delete the pod Sep 17 17:32:30.949: INFO: Waiting for pod pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a to disappear Sep 17 17:32:30.969: INFO: Pod pod-configmaps-199d4435-2934-462d-aa96-e2bf39c03f7a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:30.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-247" for this suite. 
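(The "with mappings" variant of the ConfigMap volume tests exercises the items field, which selects specific keys and remaps them to custom paths inside the volume instead of mounting every key under its own name. A minimal sketch with made-up key and path names:

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/config/path/to/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      configMap:
        name: demo-config
        items:
        - key: data-1
          path: path/to/data-1
  EOF
  kubectl logs configmap-mapping-demo   # prints: value-1
)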
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2584,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:30.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-8251 STEP: creating replication controller nodeport-test in namespace services-8251 I0917 17:32:31.174345 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8251, replica count: 2 I0917 17:32:34.225652 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0917 17:32:37.226346 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 17 17:32:37.226: INFO: Creating new exec pod Sep 17 17:32:42.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8251 execpodqrskm -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Sep 17 17:32:43.643: INFO: stderr: "I0917 17:32:43.529531 3245 log.go:172] (0x26adf10) (0x26adf80) Create stream\nI0917 17:32:43.532572 3245 log.go:172] (0x26adf10) (0x26adf80) Stream added, broadcasting: 1\nI0917 17:32:43.547784 3245 log.go:172] (0x26adf10) Reply frame received for 1\nI0917 17:32:43.548272 3245 log.go:172] (0x26adf10) (0x26ac070) Create stream\nI0917 17:32:43.548343 3245 log.go:172] (0x26adf10) (0x26ac070) Stream added, broadcasting: 3\nI0917 17:32:43.549600 3245 log.go:172] (0x26adf10) Reply frame received for 3\nI0917 17:32:43.549781 3245 log.go:172] (0x26adf10) (0x26ac2a0) Create stream\nI0917 17:32:43.549833 3245 log.go:172] (0x26adf10) (0x26ac2a0) Stream added, broadcasting: 5\nI0917 17:32:43.550981 3245 log.go:172] (0x26adf10) Reply frame received for 5\nI0917 17:32:43.623678 3245 log.go:172] (0x26adf10) Data frame received for 5\nI0917 17:32:43.623986 3245 log.go:172] (0x26ac2a0) (5) Data frame handling\nI0917 17:32:43.624445 3245 log.go:172] (0x26adf10) Data frame received for 3\nI0917 17:32:43.624579 3245 log.go:172] (0x26ac070) (3) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0917 17:32:43.625041 3245 log.go:172] (0x26ac2a0) (5) Data frame sent\nI0917 17:32:43.625383 3245 log.go:172] (0x26adf10) Data frame received for 1\nI0917 17:32:43.625503 3245 log.go:172] (0x26adf80) (1) Data frame handling\nI0917 
17:32:43.625590 3245 log.go:172] (0x26adf10) Data frame received for 5\nI0917 17:32:43.625727 3245 log.go:172] (0x26ac2a0) (5) Data frame handling\nI0917 17:32:43.625820 3245 log.go:172] (0x26adf80) (1) Data frame sent\nI0917 17:32:43.627782 3245 log.go:172] (0x26adf10) (0x26adf80) Stream removed, broadcasting: 1\nI0917 17:32:43.629358 3245 log.go:172] (0x26ac2a0) (5) Data frame sent\nI0917 17:32:43.629470 3245 log.go:172] (0x26adf10) Data frame received for 5\nI0917 17:32:43.629558 3245 log.go:172] (0x26ac2a0) (5) Data frame handling\nI0917 17:32:43.629912 3245 log.go:172] (0x26adf10) Go away received\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0917 17:32:43.633575 3245 log.go:172] (0x26adf10) (0x26adf80) Stream removed, broadcasting: 1\nI0917 17:32:43.634183 3245 log.go:172] (0x26adf10) (0x26ac070) Stream removed, broadcasting: 3\nI0917 17:32:43.634379 3245 log.go:172] (0x26adf10) (0x26ac2a0) Stream removed, broadcasting: 5\n" Sep 17 17:32:43.644: INFO: stdout: "" Sep 17 17:32:43.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8251 execpodqrskm -- /bin/sh -x -c nc -zv -t -w 2 10.104.81.65 80' Sep 17 17:32:45.022: INFO: stderr: "I0917 17:32:44.904783 3267 log.go:172] (0x26f5650) (0x26f5ea0) Create stream\nI0917 17:32:44.907615 3267 log.go:172] (0x26f5650) (0x26f5ea0) Stream added, broadcasting: 1\nI0917 17:32:44.923595 3267 log.go:172] (0x26f5650) Reply frame received for 1\nI0917 17:32:44.924223 3267 log.go:172] (0x26f5650) (0x24b42a0) Create stream\nI0917 17:32:44.924310 3267 log.go:172] (0x26f5650) (0x24b42a0) Stream added, broadcasting: 3\nI0917 17:32:44.925783 3267 log.go:172] (0x26f5650) Reply frame received for 3\nI0917 17:32:44.926098 3267 log.go:172] (0x26f5650) (0x2c98070) Create stream\nI0917 17:32:44.926182 3267 log.go:172] (0x26f5650) (0x2c98070) Stream added, broadcasting: 5\nI0917 17:32:44.927279 3267 log.go:172] (0x26f5650) Reply frame received for 5\nI0917 17:32:45.006121 3267 log.go:172] (0x26f5650) Data frame received for 3\nI0917 17:32:45.006397 3267 log.go:172] (0x26f5650) Data frame received for 1\nI0917 17:32:45.006609 3267 log.go:172] (0x24b42a0) (3) Data frame handling\nI0917 17:32:45.006797 3267 log.go:172] (0x26f5ea0) (1) Data frame handling\nI0917 17:32:45.007044 3267 log.go:172] (0x26f5650) Data frame received for 5\nI0917 17:32:45.007229 3267 log.go:172] (0x2c98070) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.81.65 80\nConnection to 10.104.81.65 80 port [tcp/http] succeeded!\nI0917 17:32:45.009165 3267 log.go:172] (0x26f5ea0) (1) Data frame sent\nI0917 17:32:45.009522 3267 log.go:172] (0x2c98070) (5) Data frame sent\nI0917 17:32:45.010235 3267 log.go:172] (0x26f5650) Data frame received for 5\nI0917 17:32:45.010343 3267 log.go:172] (0x2c98070) (5) Data frame handling\nI0917 17:32:45.010523 3267 log.go:172] (0x26f5650) (0x26f5ea0) Stream removed, broadcasting: 1\nI0917 17:32:45.012322 3267 log.go:172] (0x26f5650) Go away received\nI0917 17:32:45.014376 3267 log.go:172] (0x26f5650) (0x26f5ea0) Stream removed, broadcasting: 1\nI0917 17:32:45.014725 3267 log.go:172] (0x26f5650) (0x24b42a0) Stream removed, broadcasting: 3\nI0917 17:32:45.014865 3267 log.go:172] (0x26f5650) (0x2c98070) Stream removed, broadcasting: 5\n" Sep 17 17:32:45.022: INFO: stdout: "" Sep 17 17:32:45.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8251 execpodqrskm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.8 32704' Sep 17 17:32:46.417: INFO: stderr: "I0917 
17:32:46.295921 3290 log.go:172] (0x29b21c0) (0x29b2230) Create stream\nI0917 17:32:46.299345 3290 log.go:172] (0x29b21c0) (0x29b2230) Stream added, broadcasting: 1\nI0917 17:32:46.313934 3290 log.go:172] (0x29b21c0) Reply frame received for 1\nI0917 17:32:46.314376 3290 log.go:172] (0x29b21c0) (0x29b23f0) Create stream\nI0917 17:32:46.314435 3290 log.go:172] (0x29b21c0) (0x29b23f0) Stream added, broadcasting: 3\nI0917 17:32:46.315923 3290 log.go:172] (0x29b21c0) Reply frame received for 3\nI0917 17:32:46.316532 3290 log.go:172] (0x29b21c0) (0x24e9f10) Create stream\nI0917 17:32:46.316652 3290 log.go:172] (0x29b21c0) (0x24e9f10) Stream added, broadcasting: 5\nI0917 17:32:46.318383 3290 log.go:172] (0x29b21c0) Reply frame received for 5\nI0917 17:32:46.396784 3290 log.go:172] (0x29b21c0) Data frame received for 5\nI0917 17:32:46.397037 3290 log.go:172] (0x24e9f10) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.8 32704\nI0917 17:32:46.397445 3290 log.go:172] (0x29b21c0) Data frame received for 3\nI0917 17:32:46.397741 3290 log.go:172] (0x29b23f0) (3) Data frame handling\nI0917 17:32:46.397899 3290 log.go:172] (0x24e9f10) (5) Data frame sent\nI0917 17:32:46.398822 3290 log.go:172] (0x29b21c0) Data frame received for 1\nI0917 17:32:46.398909 3290 log.go:172] (0x29b2230) (1) Data frame handling\nI0917 17:32:46.399022 3290 log.go:172] (0x29b2230) (1) Data frame sent\nI0917 17:32:46.399161 3290 log.go:172] (0x29b21c0) Data frame received for 5\nI0917 17:32:46.402133 3290 log.go:172] (0x24e9f10) (5) Data frame handling\nI0917 17:32:46.402959 3290 log.go:172] (0x24e9f10) (5) Data frame sent\nI0917 17:32:46.403309 3290 log.go:172] (0x29b21c0) Data frame received for 5\nI0917 17:32:46.403419 3290 log.go:172] (0x24e9f10) (5) Data frame handling\nConnection to 172.18.0.8 32704 port [tcp/32704] succeeded!\nI0917 17:32:46.406979 3290 log.go:172] (0x29b21c0) (0x29b2230) Stream removed, broadcasting: 1\nI0917 17:32:46.407848 3290 log.go:172] (0x29b21c0) Go away received\nI0917 17:32:46.411252 3290 log.go:172] (0x29b21c0) (0x29b2230) Stream removed, broadcasting: 1\nI0917 17:32:46.411446 3290 log.go:172] (0x29b21c0) (0x29b23f0) Stream removed, broadcasting: 3\nI0917 17:32:46.411607 3290 log.go:172] (0x29b21c0) (0x24e9f10) Stream removed, broadcasting: 5\n" Sep 17 17:32:46.418: INFO: stdout: "" Sep 17 17:32:46.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8251 execpodqrskm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32704' Sep 17 17:32:47.796: INFO: stderr: "I0917 17:32:47.674784 3315 log.go:172] (0x26d56c0) (0x26d5810) Create stream\nI0917 17:32:47.679054 3315 log.go:172] (0x26d56c0) (0x26d5810) Stream added, broadcasting: 1\nI0917 17:32:47.693098 3315 log.go:172] (0x26d56c0) Reply frame received for 1\nI0917 17:32:47.694168 3315 log.go:172] (0x26d56c0) (0x25c6310) Create stream\nI0917 17:32:47.694327 3315 log.go:172] (0x26d56c0) (0x25c6310) Stream added, broadcasting: 3\nI0917 17:32:47.696214 3315 log.go:172] (0x26d56c0) Reply frame received for 3\nI0917 17:32:47.696448 3315 log.go:172] (0x26d56c0) (0x25c6a10) Create stream\nI0917 17:32:47.696508 3315 log.go:172] (0x26d56c0) (0x25c6a10) Stream added, broadcasting: 5\nI0917 17:32:47.697941 3315 log.go:172] (0x26d56c0) Reply frame received for 5\nI0917 17:32:47.775490 3315 log.go:172] (0x26d56c0) Data frame received for 3\nI0917 17:32:47.776038 3315 log.go:172] (0x25c6310) (3) Data frame handling\nI0917 17:32:47.776340 3315 log.go:172] (0x26d56c0) Data frame received for 1\nI0917 17:32:47.776523 3315 
log.go:172] (0x26d5810) (1) Data frame handling\nI0917 17:32:47.776661 3315 log.go:172] (0x26d56c0) Data frame received for 5\nI0917 17:32:47.776842 3315 log.go:172] (0x25c6a10) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.10 32704\nConnection to 172.18.0.10 32704 port [tcp/32704] succeeded!\nI0917 17:32:47.779132 3315 log.go:172] (0x26d5810) (1) Data frame sent\nI0917 17:32:47.779346 3315 log.go:172] (0x25c6a10) (5) Data frame sent\nI0917 17:32:47.779550 3315 log.go:172] (0x26d56c0) Data frame received for 5\nI0917 17:32:47.779685 3315 log.go:172] (0x25c6a10) (5) Data frame handling\nI0917 17:32:47.780700 3315 log.go:172] (0x26d56c0) (0x26d5810) Stream removed, broadcasting: 1\nI0917 17:32:47.782467 3315 log.go:172] (0x26d56c0) Go away received\nI0917 17:32:47.786962 3315 log.go:172] (0x26d56c0) (0x26d5810) Stream removed, broadcasting: 1\nI0917 17:32:47.787285 3315 log.go:172] (0x26d56c0) (0x25c6310) Stream removed, broadcasting: 3\nI0917 17:32:47.787568 3315 log.go:172] (0x26d56c0) (0x25c6a10) Stream removed, broadcasting: 5\n" Sep 17 17:32:47.797: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:47.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8251" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.829 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":166,"skipped":2605,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:47.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:32:47.913: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:48.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1259" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":167,"skipped":2623,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:48.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 17 17:32:49.063: INFO: Waiting up to 5m0s for pod "pod-528c440b-f786-47f9-94a4-88ae9bb26885" in namespace "emptydir-3410" to be "success or failure" Sep 17 17:32:49.068: INFO: Pod "pod-528c440b-f786-47f9-94a4-88ae9bb26885": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470525ms Sep 17 17:32:51.075: INFO: Pod "pod-528c440b-f786-47f9-94a4-88ae9bb26885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011151094s Sep 17 17:32:53.101: INFO: Pod "pod-528c440b-f786-47f9-94a4-88ae9bb26885": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037602088s STEP: Saw pod success Sep 17 17:32:53.101: INFO: Pod "pod-528c440b-f786-47f9-94a4-88ae9bb26885" satisfied condition "success or failure" Sep 17 17:32:53.313: INFO: Trying to get logs from node jerma-worker2 pod pod-528c440b-f786-47f9-94a4-88ae9bb26885 container test-container: STEP: delete the pod Sep 17 17:32:53.560: INFO: Waiting for pod pod-528c440b-f786-47f9-94a4-88ae9bb26885 to disappear Sep 17 17:32:53.581: INFO: Pod pod-528c440b-f786-47f9-94a4-88ae9bb26885 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:53.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3410" for this suite. 
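(The EmptyDir matrix being run here varies the user the container runs as, the file mode the test writes, and the backing medium — tmpfs (medium: Memory) versus node disk. A rough equivalent of the non-root/0777/tmpfs case, with illustrative names; the permission check is done by the container itself, since emptyDir has no mode field of its own:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001          # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory         # tmpfs-backed
  EOF
  kubectl logs emptydir-tmpfs-demo   # expect: -rwxrwxrwx ... /test-volume/f
)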
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2637,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:53.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:32:58.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7963" for this suite. • [SLOW TEST:5.241 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":169,"skipped":2661,"failed":0} [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:32:59.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Sep 17 17:32:59.148: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: 
deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:33:17.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4994" for this suite. • [SLOW TEST:18.729 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2661,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:33:17.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:33:21.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-714" for this suite. 
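The read-only-rootfs pod above is built in Go by the framework; a minimal manual reproduction, assuming a busybox image and hypothetical names, might be:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /newfile"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs busybox-readonly      # expected to show a read-only file system error

With readOnlyRootFilesystem: true the write to the container's root filesystem is expected to fail, mirroring what the test asserts.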
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2662,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:33:21.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Sep 17 17:33:21.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6436' Sep 17 17:33:23.468: INFO: stderr: "" Sep 17 17:33:23.468: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 17 17:33:23.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6436' Sep 17 17:33:24.584: INFO: stderr: "" Sep 17 17:33:24.585: INFO: stdout: "update-demo-nautilus-fpcsj update-demo-nautilus-tlshf " Sep 17 17:33:24.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpcsj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:25.750: INFO: stderr: "" Sep 17 17:33:25.750: INFO: stdout: "" Sep 17 17:33:25.750: INFO: update-demo-nautilus-fpcsj is created but not running Sep 17 17:33:30.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6436' Sep 17 17:33:31.859: INFO: stderr: "" Sep 17 17:33:31.860: INFO: stdout: "update-demo-nautilus-fpcsj update-demo-nautilus-tlshf " Sep 17 17:33:31.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpcsj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:32.981: INFO: stderr: "" Sep 17 17:33:32.981: INFO: stdout: "true" Sep 17 17:33:32.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpcsj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:34.047: INFO: stderr: "" Sep 17 17:33:34.047: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 17:33:34.047: INFO: validating pod update-demo-nautilus-fpcsj Sep 17 17:33:34.054: INFO: got data: { "image": "nautilus.jpg" } Sep 17 17:33:34.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 17:33:34.054: INFO: update-demo-nautilus-fpcsj is verified up and running Sep 17 17:33:34.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tlshf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:35.143: INFO: stderr: "" Sep 17 17:33:35.143: INFO: stdout: "true" Sep 17 17:33:35.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tlshf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:36.226: INFO: stderr: "" Sep 17 17:33:36.226: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 17:33:36.226: INFO: validating pod update-demo-nautilus-tlshf Sep 17 17:33:36.232: INFO: got data: { "image": "nautilus.jpg" } Sep 17 17:33:36.233: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 17:33:36.233: INFO: update-demo-nautilus-tlshf is verified up and running STEP: scaling down the replication controller Sep 17 17:33:36.246: INFO: scanned /root for discovery docs: Sep 17 17:33:36.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6436' Sep 17 17:33:37.376: INFO: stderr: "" Sep 17 17:33:37.376: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 17 17:33:37.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6436' Sep 17 17:33:38.492: INFO: stderr: "" Sep 17 17:33:38.492: INFO: stdout: "update-demo-nautilus-fpcsj update-demo-nautilus-tlshf " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 17 17:33:43.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6436' Sep 17 17:33:44.630: INFO: stderr: "" Sep 17 17:33:44.630: INFO: stdout: "update-demo-nautilus-tlshf " Sep 17 17:33:44.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tlshf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:45.738: INFO: stderr: "" Sep 17 17:33:45.739: INFO: stdout: "true" Sep 17 17:33:45.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tlshf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:46.852: INFO: stderr: "" Sep 17 17:33:46.852: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 17:33:46.852: INFO: validating pod update-demo-nautilus-tlshf Sep 17 17:33:46.857: INFO: got data: { "image": "nautilus.jpg" } Sep 17 17:33:46.857: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 17:33:46.858: INFO: update-demo-nautilus-tlshf is verified up and running STEP: scaling up the replication controller Sep 17 17:33:46.869: INFO: scanned /root for discovery docs: Sep 17 17:33:46.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6436' Sep 17 17:33:48.023: INFO: stderr: "" Sep 17 17:33:48.024: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 17 17:33:48.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6436' Sep 17 17:33:49.129: INFO: stderr: "" Sep 17 17:33:49.129: INFO: stdout: "update-demo-nautilus-gfr4f update-demo-nautilus-tlshf " Sep 17 17:33:49.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfr4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:50.215: INFO: stderr: "" Sep 17 17:33:50.216: INFO: stdout: "" Sep 17 17:33:50.216: INFO: update-demo-nautilus-gfr4f is created but not running Sep 17 17:33:55.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6436' Sep 17 17:33:56.389: INFO: stderr: "" Sep 17 17:33:56.390: INFO: stdout: "update-demo-nautilus-gfr4f update-demo-nautilus-tlshf " Sep 17 17:33:56.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfr4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:57.640: INFO: stderr: "" Sep 17 17:33:57.640: INFO: stdout: "true" Sep 17 17:33:57.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfr4f -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:58.746: INFO: stderr: "" Sep 17 17:33:58.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 17:33:58.746: INFO: validating pod update-demo-nautilus-gfr4f Sep 17 17:33:58.752: INFO: got data: { "image": "nautilus.jpg" } Sep 17 17:33:58.753: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 17:33:58.753: INFO: update-demo-nautilus-gfr4f is verified up and running Sep 17 17:33:58.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tlshf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:33:59.887: INFO: stderr: "" Sep 17 17:33:59.887: INFO: stdout: "true" Sep 17 17:33:59.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tlshf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6436' Sep 17 17:34:01.006: INFO: stderr: "" Sep 17 17:34:01.006: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 17 17:34:01.006: INFO: validating pod update-demo-nautilus-tlshf Sep 17 17:34:01.012: INFO: got data: { "image": "nautilus.jpg" } Sep 17 17:34:01.013: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 17 17:34:01.013: INFO: update-demo-nautilus-tlshf is verified up and running STEP: using delete to clean up resources Sep 17 17:34:01.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6436' Sep 17 17:34:02.089: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 17 17:34:02.089: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 17 17:34:02.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6436' Sep 17 17:34:03.246: INFO: stderr: "No resources found in kubectl-6436 namespace.\n" Sep 17 17:34:03.247: INFO: stdout: "" Sep 17 17:34:03.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6436 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 17 17:34:04.386: INFO: stderr: "" Sep 17 17:34:04.387: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:34:04.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6436" for this suite. 
• [SLOW TEST:42.470 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":172,"skipped":2683,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:34:04.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Sep 17 17:34:04.515: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:04.519: INFO: Number of nodes with available pods: 0 Sep 17 17:34:04.519: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:05.527: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:05.533: INFO: Number of nodes with available pods: 0 Sep 17 17:34:05.533: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:06.529: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:06.536: INFO: Number of nodes with available pods: 0 Sep 17 17:34:06.537: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:07.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:07.615: INFO: Number of nodes with available pods: 0 Sep 17 17:34:07.615: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:08.546: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:08.552: INFO: Number of nodes with available pods: 1 Sep 17 17:34:08.552: INFO: Node jerma-worker2 is running more than one daemon pod Sep 17 17:34:09.530: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:09.544: INFO: Number of nodes with available pods: 2 Sep 17 17:34:09.544: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Sep 17 17:34:09.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:09.609: INFO: Number of nodes with available pods: 1 Sep 17 17:34:09.609: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:10.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:10.626: INFO: Number of nodes with available pods: 1 Sep 17 17:34:10.626: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:11.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:11.625: INFO: Number of nodes with available pods: 1 Sep 17 17:34:11.625: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:12.631: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:12.638: INFO: Number of nodes with available pods: 1 Sep 17 17:34:12.638: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:13.621: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:13.627: INFO: Number of nodes with available pods: 1 Sep 17 17:34:13.627: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:14.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:14.626: INFO: Number of nodes with available pods: 1 Sep 17 17:34:14.626: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:15.618: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:15.624: INFO: Number of nodes with available pods: 1 Sep 17 17:34:15.624: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:16.621: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:16.628: INFO: Number of nodes with available pods: 1 Sep 17 17:34:16.628: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:17.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:17.646: INFO: Number of nodes with available pods: 1 Sep 17 17:34:17.646: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:18.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:18.628: INFO: Number of nodes with available pods: 1 Sep 17 17:34:18.628: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:19.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:19.625: INFO: Number of nodes with available pods: 1 Sep 17 17:34:19.625: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:20.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:20.627: INFO: Number of nodes with available pods: 1 Sep 17 17:34:20.627: INFO: Node jerma-worker is running more than one daemon pod Sep 17 17:34:21.619: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 17 17:34:21.626: INFO: Number of nodes with available pods: 2 Sep 17 17:34:21.626: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9201, will wait for the garbage collector to delete the pods Sep 17 17:34:21.695: INFO: Deleting DaemonSet.extensions daemon-set took: 8.28025ms Sep 17 17:34:21.795: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.930508ms Sep 17 17:34:27.702: INFO: Number of nodes with available pods: 0 Sep 17 17:34:27.702: INFO: Number of running nodes: 0, number of available pods: 0 Sep 17 17:34:27.707: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9201/daemonsets","resourceVersion":"1082102"},"items":null} Sep 17 17:34:27.711: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9201/pods","resourceVersion":"1082102"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:34:27.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9201" for this suite. 
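For reference on the repeated "can't tolerate node jerma-control-plane" lines above: the control-plane node carries a node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet deliberately has no matching toleration. A sketch of the toleration that would let its pods land there too (illustrative only; by this point in the run the DaemonSet has already been deleted):

kubectl -n daemonsets-9201 patch daemonset daemon-set --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/tolerations", "value": [
    {"key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule"}
  ]}
]'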
• [SLOW TEST:23.336 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":173,"skipped":2691,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:34:27.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Sep 17 17:34:27.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011" in namespace "downward-api-8871" to be "success or failure" Sep 17 17:34:27.915: INFO: Pod "downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011": Phase="Pending", Reason="", readiness=false. Elapsed: 67.29214ms Sep 17 17:34:29.922: INFO: Pod "downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074217148s Sep 17 17:34:31.928: INFO: Pod "downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080903992s STEP: Saw pod success Sep 17 17:34:31.929: INFO: Pod "downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011" satisfied condition "success or failure" Sep 17 17:34:31.933: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011 container client-container: STEP: delete the pod Sep 17 17:34:31.971: INFO: Waiting for pod downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011 to disappear Sep 17 17:34:31.986: INFO: Pod downwardapi-volume-0ec1dbf8-71ac-4397-975d-e78629887011 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:34:31.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8871" for this suite. 
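A minimal sketch of a pod like the one the Downward API test above creates, assuming busybox in place of the e2e image and hypothetical names; the mounted file would contain the limit in bytes (67108864 for 64Mi, with the default divisor of 1):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF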
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:34:32.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Sep 17 17:34:32.093: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3721 /api/v1/namespaces/watch-3721/configmaps/e2e-watch-test-watch-closed 1bde6b6a-7d50-4603-8aec-5e93441e7a4a 1082133 0 2020-09-17 17:34:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Sep 17 17:34:32.094: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3721 /api/v1/namespaces/watch-3721/configmaps/e2e-watch-test-watch-closed 1bde6b6a-7d50-4603-8aec-5e93441e7a4a 1082134 0 2020-09-17 17:34:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Sep 17 17:34:32.139: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3721 /api/v1/namespaces/watch-3721/configmaps/e2e-watch-test-watch-closed 1bde6b6a-7d50-4603-8aec-5e93441e7a4a 1082136 0 2020-09-17 17:34:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Sep 17 17:34:32.141: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3721 /api/v1/namespaces/watch-3721/configmaps/e2e-watch-test-watch-closed 1bde6b6a-7d50-4603-8aec-5e93441e7a4a 1082138 0 2020-09-17 17:34:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:34:32.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"watch-3721" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":175,"skipped":2777,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:34:32.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Sep 17 17:34:32.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Sep 17 17:34:33.364: INFO: stderr: "" Sep 17 17:34:33.364: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:33863\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:33863/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:34:33.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5972" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":176,"skipped":2784,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:34:33.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl rolling-update /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587 [It] should support rolling-update to same image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 17 17:34:33.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-369' Sep 17 17:34:34.628: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Sep 17 17:34:34.628: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Sep 17 17:34:34.645: INFO: scanned /root for discovery docs: Sep 17 17:34:34.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-369' Sep 17 17:34:52.264: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Sep 17 17:34:52.264: INFO: stdout: "Created e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724\nScaling up e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Sep 17 17:34:52.264: INFO: stdout: "Created e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724\nScaling up e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Sep 17 17:34:52.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-369' Sep 17 17:34:53.371: INFO: stderr: "" Sep 17 17:34:53.371: INFO: stdout: "e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724-t4kl4 " Sep 17 17:34:53.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724-t4kl4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-369' Sep 17 17:34:54.492: INFO: stderr: "" Sep 17 17:34:54.492: INFO: stdout: "true" Sep 17 17:34:54.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724-t4kl4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-369' Sep 17 17:34:55.598: INFO: stderr: "" Sep 17 17:34:55.598: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Sep 17 17:34:55.598: INFO: e2e-test-httpd-rc-3cb0a796b18546856901a710d0fcf724-t4kl4 is verified up and running [AfterEach] Kubectl rolling-update /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593 Sep 17 17:34:55.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-369' Sep 17 17:34:56.705: INFO: stderr: "" Sep 17 17:34:56.705: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:34:56.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-369" for this suite. 
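The deprecation warnings above point at the replacement flow; a sketch of the modern equivalent using a Deployment and rollout (all names hypothetical; kubectl create deployment names the container after the image, here httpd):

kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
kubectl set image deployment/e2e-test-httpd httpd=docker.io/library/httpd:2.4.38-alpine
kubectl rollout status deployment/e2e-test-httpd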
• [SLOW TEST:23.352 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 should support rolling-update to same image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":177,"skipped":2786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:34:56.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Sep 17 17:34:56.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-121" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":178,"skipped":2842,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Sep 17 17:34:56.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Sep 17 17:34:57.050: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
alternatives.log
containers/
[identical node-log listing returned for each of the 20 proxied requests, (0) through (19); the remainder of this proxy test's output, including its teardown and PASSED record, is truncated in the source log]
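The listing above is what the apiserver's node proxy subresource returns; the same request can be issued directly, which is the behavior this test exercises:

kubectl --kubeconfig=/root/.kube/config get --raw /api/v1/nodes/jerma-worker/proxy/logs/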
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:34:57.769: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a336aedd-52df-4f37-83ee-625e4781e76c" in namespace "security-context-test-238" to be "success or failure"
Sep 17 17:34:58.627: INFO: Pod "busybox-readonly-false-a336aedd-52df-4f37-83ee-625e4781e76c": Phase="Pending", Reason="", readiness=false. Elapsed: 857.934651ms
Sep 17 17:35:00.634: INFO: Pod "busybox-readonly-false-a336aedd-52df-4f37-83ee-625e4781e76c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8645468s
Sep 17 17:35:02.639: INFO: Pod "busybox-readonly-false-a336aedd-52df-4f37-83ee-625e4781e76c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.869941239s
Sep 17 17:35:02.640: INFO: Pod "busybox-readonly-false-a336aedd-52df-4f37-83ee-625e4781e76c" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:35:02.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-238" for this suite.

• [SLOW TEST:5.054 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2929,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:35:02.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:35:02.734: INFO: Creating deployment "webserver-deployment"
Sep 17 17:35:02.778: INFO: Waiting for observed generation 1
Sep 17 17:35:04.809: INFO: Waiting for all required pods to come up
Sep 17 17:35:05.082: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 17 17:35:15.172: INFO: Waiting for deployment "webserver-deployment" to complete
Sep 17 17:35:15.182: INFO: Updating deployment "webserver-deployment" with a non-existent image
Sep 17 17:35:15.196: INFO: Updating deployment webserver-deployment
Sep 17 17:35:15.197: INFO: Waiting for observed generation 2
Sep 17 17:35:17.261: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 17 17:35:17.266: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 17 17:35:17.270: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 17 17:35:17.285: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 17 17:35:17.285: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Sep 17 17:35:17.289: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 17 17:35:17.298: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Sep 17 17:35:17.298: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Sep 17 17:35:17.307: INFO: Updating deployment webserver-deployment
Sep 17 17:35:17.307: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Sep 17 17:35:17.395: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Sep 17 17:35:17.496: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
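The two .spec.replicas values just verified are the proportional-scaling split; spelled out with this run's numbers (the controller's exact rounding is an implementation detail, but these totals are what the test asserts):

# Before the scale-up: old RS at 8 replicas, new RS at 5 (13 total = 10 + maxSurge 3).
# Scaling the Deployment 10 -> 30 raises the cap to 30 + 3 = 33, and the extra
# 33 - 13 = 20 replicas are split in proportion to current ReplicaSet sizes:
#   old RS: 8 + 12 = 20   (matches ".spec.replicas = 20" above)
#   new RS: 5 +  8 = 13   (matches ".spec.replicas = 13" above)
#   20 + 13 = 33 = spec.replicas (30) + maxSurge (3)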
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Sep 17 17:35:17.720: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-6521 /apis/apps/v1/namespaces/deployment-6521/deployments/webserver-deployment 7c045395-0501-430b-989d-bc02b1f0a770 1082639 3 2020-09-17 17:35:02 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9444af8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-09-17 17:35:16 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-17 17:35:17 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Sep 17 17:35:17.841: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-6521 /apis/apps/v1/namespaces/deployment-6521/replicasets/webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 1082678 3 2020-09-17 17:35:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7c045395-0501-430b-989d-bc02b1f0a770 0x9d81fc7 0x9d81fc8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xa52c078  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Sep 17 17:35:17.841: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Sep 17 17:35:17.842: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-6521 /apis/apps/v1/namespaces/deployment-6521/replicasets/webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 1082672 3 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7c045395-0501-430b-989d-bc02b1f0a770 0x9d81f07 0x9d81f08}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9d81f68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Sep 17 17:35:17.926: INFO: Pod "webserver-deployment-595b5b9587-4jw7c" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4jw7c webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-4jw7c 7c7d3ccc-239a-4d9d-ae77-2b91433df0ad 1082677 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52c517 0xa52c518}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-17 17:35:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
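
Note: the &Pod{...} blocks in this log are the e2e framework's plain %v dumps of v1.Pod objects. Whether a pod is logged as "is available" or "is not available" comes down to its Ready condition plus the deployment's minReadySeconds. A minimal sketch of that check, assuming the usual availability semantics; readyCondition and isPodAvailable are illustrative names, not the framework's own helpers:

package podlognotes

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readyCondition returns the pod's Ready condition, or nil if the kubelet
// has not reported one yet.
func readyCondition(status v1.PodStatus) *v1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == v1.PodReady {
			return &status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable reports whether the pod has been Ready for at least
// minReadySeconds as of now; with minReadySeconds == 0, Ready alone suffices.
// webserver-deployment-595b5b9587-4jw7c above fails it: Ready is False while
// its httpd container is still ContainerCreating.
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := readyCondition(pod.Status)
	if c == nil || c.Status != v1.ConditionTrue {
		return false
	}
	if minReadySeconds == 0 {
		return true
	}
	window := time.Duration(minReadySeconds) * time.Second
	return !c.LastTransitionTime.IsZero() && c.LastTransitionTime.Add(window).Before(now.Time)
}
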
Sep 17 17:35:17.928: INFO: Pod "webserver-deployment-595b5b9587-5cph6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5cph6 webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-5cph6 56426e6e-c8ef-4188-9a12-f8ca05c66d64 1082664 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52c7b7 0xa52c7b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
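
webserver-deployment-595b5b9587-5cph6 was created in the same second as the dump, so its status carries only PodScheduled=True and HostIP/PodIP are still empty: the kubelet on jerma-worker2 has not yet posted Initialized, ContainersReady, or Ready. A small triage helper that condenses such a dump to one line; it is hypothetical rather than part of the framework (imports: fmt and strings, plus the types from the sketch above):

// summarize condenses a dump to "name phase [conditions that are True]",
// e.g. "webserver-deployment-595b5b9587-5cph6 Pending [PodScheduled]".
func summarize(pod *v1.Pod) string {
	var set []string
	for _, c := range pod.Status.Conditions {
		if c.Status == v1.ConditionTrue {
			set = append(set, string(c.Type))
		}
	}
	return fmt.Sprintf("%s %s [%s]", pod.Name, pod.Status.Phase, strings.Join(set, " "))
}
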
Sep 17 17:35:17.929: INFO: Pod "webserver-deployment-595b5b9587-62698" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-62698 webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-62698 1b789639-0c1c-466e-ab99-0c73641ab903 1082667 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52ca07 0xa52ca08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
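
The bracketed owner entry in each ObjectMeta, {apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-...}, is an OwnerReference; the two trailing hex values are most likely the Controller and BlockOwnerDeletion *bool fields rendered as pointers by %v. metav1.GetControllerOf is the stock accessor for the controlling reference; the wrapper around it here is only a sketch:

// controllerName resolves the managing controller of a pod; it returns
// "ReplicaSet/webserver-deployment-595b5b9587" for every pod in this test.
func controllerName(pod *v1.Pod) string {
	owner := metav1.GetControllerOf(pod) // nil when nothing controls the pod
	if owner == nil {
		return "<none>"
	}
	return fmt.Sprintf("%s/%s", owner.Kind, owner.Name)
}
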
Sep 17 17:35:17.930: INFO: Pod "webserver-deployment-595b5b9587-7r8lz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7r8lz webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-7r8lz f7a9ede6-f595-4a72-9987-82a53d5fd612 1082484 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52cc27 0xa52cc28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-09-17 17:35:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.2.31,StartTime:2020-09-17 17:35:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://89e71289e7e70205ab8807c805dbe38ce90dfb7307cda27f8a3e4606972493e7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
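
The 595b5b9587 segment in every pod name matches the pod-template-hash label, which the deployment controller stamps on this ReplicaSet's pods; selecting on it reproduces exactly the set of pods dumped here. A client-go sketch, with namespace and labels taken from the log; it uses the current List signature (the context argument postdates the v1.17 client these logs come from), and clientset construction is omitted:

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsForTemplateHash lists the pods behind the dumps above.
func podsForTemplateHash(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("deployment-6521").List(ctx, metav1.ListOptions{
		LabelSelector: "name=httpd,pod-template-hash=595b5b9587",
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
	return nil
}
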
Sep 17 17:35:17.931: INFO: Pod "webserver-deployment-595b5b9587-8p6hm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8p6hm webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-8p6hm 330e6f87-9369-46d9-8cf8-aae332fc365c 1082663 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52ce87 0xa52ce88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-09-17 17:35:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
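
webserver-deployment-595b5b9587-8p6hm shows the in-between state: scheduled (HostIP 172.18.0.10) and Initialized, but the httpd container is Waiting with Reason ContainerCreating, so Ready is False and PodIP is empty. Tests normally poll through this window; a sketch in the spirit of the framework's wait helpers, where wait.PollImmediate is real apimachinery API of this era (import k8s.io/apimachinery/pkg/util/wait) and the predicate is illustrative:

// waitUntilRunning polls a getter until the pod is Running with an IP,
// i.e. past the ContainerCreating window seen above.
func waitUntilRunning(get func() (*v1.Pod, error)) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pod, err := get()
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == v1.PodRunning && pod.Status.PodIP != "", nil
	})
}
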
Sep 17 17:35:17.933: INFO: Pod "webserver-deployment-595b5b9587-c8h4w" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c8h4w webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-c8h4w 34438fb3-b5b0-4cb3-a0d8-cc6c79fe9580 1082543 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52cff7 0xa52cff8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.2.35,StartTime:2020-09-17 17:35:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dd031a3e25683981f3af00533507b16608dc7e92c114ce07e5218d1106368677,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
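
Every dump ends in QOSClass:BestEffort, which follows from the httpd container declaring empty Resources.Requests and Resources.Limits. A deliberately simplified classifier; the real rules additionally compare requests to limits per resource to separate Guaranteed from Burstable:

// qosClass, simplified: no requests or limits on any container means
// BestEffort; anything else is treated as Burstable here (the Guaranteed
// case is not modelled).
func qosClass(pod *v1.Pod) v1.PodQOSClass {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return v1.PodQOSBurstable
		}
	}
	return v1.PodQOSBestEffort
}
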
Sep 17 17:35:17.934: INFO: Pod "webserver-deployment-595b5b9587-c8zh6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c8zh6 webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-c8zh6 d853e330-3175-4b08-a139-c13561f17d95 1082652 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52d177 0xa52d178}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.935: INFO: Pod "webserver-deployment-595b5b9587-gjxxk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gjxxk webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-gjxxk c5a330c9-ec78-451b-8508-f0c55fc2acb3 1082665 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52d357 0xa52d358}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
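
The two NoExecute tolerations repeated in every spec (node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, TolerationSeconds *300) were not written by the test; the DefaultTolerationSeconds admission plugin injects them so a pod survives five minutes on a not-ready or unreachable node before eviction. Building the equivalent slice by hand, for comparison against the dumps:

// defaultTolerations mirrors what the admission plugin added above.
func defaultTolerations() []v1.Toleration {
	seconds := int64(300) // the plugin's default grace period
	return []v1.Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: v1.TolerationOpExists,
			Effect: v1.TaintEffectNoExecute, TolerationSeconds: &seconds},
		{Key: "node.kubernetes.io/unreachable", Operator: v1.TolerationOpExists,
			Effect: v1.TaintEffectNoExecute, TolerationSeconds: &seconds},
	}
}
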
Sep 17 17:35:17.935: INFO: Pod "webserver-deployment-595b5b9587-gsngs" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gsngs webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-gsngs a6c5c194-2636-4026-895c-b6b7d2754e7b 1082648 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52d587 0xa52d588}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.937: INFO: Pod "webserver-deployment-595b5b9587-kxsxb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kxsxb webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-kxsxb a9083af4-9887-4c85-8260-599986c60cb5 1082549 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52d7a7 0xa52d7a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.2.33,StartTime:2020-09-17 17:35:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://18ff6cd3d2108ab1e606e3a70a2ef424cc16c1077218260e91ac37642ea0e54d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
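
For Running pods such as kxsxb the ContainerStatuses block is fully populated: a containerd:// ContainerID, the image resolved to its sha256 digest, RestartCount, and a non-nil Started pointer. A small printer for the fields usually scanned for in these dumps; illustrative only:

// containerBrief prints one line per container status, e.g.
// "httpd ready=true restarts=0 containerd://18ff6cd3..." for kxsxb.
func containerBrief(pod *v1.Pod) {
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("%s ready=%t restarts=%d %s\n", s.Name, s.Ready, s.RestartCount, s.ContainerID)
	}
}
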
Sep 17 17:35:17.938: INFO: Pod "webserver-deployment-595b5b9587-mphcr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mphcr webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-mphcr 6d8a7a76-ac7e-404e-b5a1-9e534ea7c076 1082506 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52d927 0xa52d928}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-09-17 17:35:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.2.32,StartTime:2020-09-17 17:35:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef2ab4813e5d97bfc09a272ccbd18e10d4b2b882760ba64d9e32eea8df72b24c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.939: INFO: Pod "webserver-deployment-595b5b9587-pq8sl" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pq8sl webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-pq8sl fcb8abd2-d60c-4500-a88b-2d622ab32846 1082669 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52db07 0xa52db08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.941: INFO: Pod "webserver-deployment-595b5b9587-psvdk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-psvdk webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-psvdk 635fec99-c5ea-47b9-b2ea-fdb41858a4c1 1082682 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52dcd7 0xa52dcd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-09-17 17:35:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
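
Taken together, these available/not-available lines are the deployment's availableReplicas bookkeeping caught mid scale-up: the pods created at 17:35:02 are Running and Ready, while the 17:35:17 batch is still Pending. Tallying a pod list the same way, reusing the isPodAvailable sketch from earlier:

// availableCount mirrors the bookkeeping behind the log lines above.
func availableCount(pods []v1.Pod, minReadySeconds int32) (avail, total int) {
	now := metav1.Now()
	for i := range pods {
		total++
		if isPodAvailable(&pods[i], minReadySeconds, now) {
			avail++
		}
	}
	return avail, total
}
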
Sep 17 17:35:17.942: INFO: Pod "webserver-deployment-595b5b9587-qdv97" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qdv97 webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-qdv97 0d0df38a-d064-4dfb-8fbb-fac4b656dc63 1082534 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa52de97 0xa52de98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.1.88,StartTime:2020-09-17 17:35:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fd34ee5286906312a1b14c0d573cda9c0b14e00afeff3dfd6e5dc552525cea61,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.943: INFO: Pod "webserver-deployment-595b5b9587-qfrgt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qfrgt webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-qfrgt be1c8215-052a-4500-b257-aaaebb8d1fab 1082495 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa424027 0xa424028}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.1.86,StartTime:2020-09-17 17:35:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://061529315b221d84726f0e4eeb529b9324f16f06d02b5ee9c679648477acdaf6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.944: INFO: Pod "webserver-deployment-595b5b9587-rcmtm" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rcmtm webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-rcmtm e24c7048-6948-412e-8ccd-dc161136f25d 1082538 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa4241a7 0xa4241a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.1.90,StartTime:2020-09-17 17:35:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1127d422daba8c468a814b479d8e159b6b0a4b5566bac438c880296aaadd594a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.945: INFO: Pod "webserver-deployment-595b5b9587-rf925" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rf925 webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-rf925 570ac44e-4fb7-434a-8372-23f5e0286a4c 1082644 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa424327 0xa424328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.946: INFO: Pod "webserver-deployment-595b5b9587-w5x6g" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w5x6g webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-w5x6g 7f28b6c5-6811-4e56-a8a6-ecdb8570a39d 1082670 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa424447 0xa424448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.948: INFO: Pod "webserver-deployment-595b5b9587-xv2s5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xv2s5 webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-xv2s5 c5f19642-debe-4daa-9a97-79ec83850ab0 1082688 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa424567 0xa424568}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-17 17:35:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.949: INFO: Pod "webserver-deployment-595b5b9587-zw74g" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zw74g webserver-deployment-595b5b9587- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-595b5b9587-zw74g ad73f4e4-6546-4ac1-8028-32c69210f33f 1082519 0 2020-09-17 17:35:02 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 456418b2-995f-4652-810a-f70de8cbe1fe 0xa4246d7 0xa4246d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.1.87,StartTime:2020-09-17 17:35:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:35:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f1e95851c92c584e94acd9ff41e959a8ee78ad474164827b954264b577ceb09b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.950: INFO: Pod "webserver-deployment-c7997dcc8-2tqw2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2tqw2 webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-2tqw2 6332eceb-e542-4891-bd30-9011dcd9df1e 1082658 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa424917 0xa424918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.951: INFO: Pod "webserver-deployment-c7997dcc8-7pjwx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7pjwx webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-7pjwx 3c8f7dcd-df97-47b6-9688-30420a17c5c2 1082593 0 2020-09-17 17:35:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa424a47 0xa424a48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-17 17:35:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.952: INFO: Pod "webserver-deployment-c7997dcc8-bf8kc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bf8kc webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-bf8kc d759a753-e97f-469f-aea6-a60f680dc2ef 1082668 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa424bf7 0xa424bf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.953: INFO: Pod "webserver-deployment-c7997dcc8-dmqht" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dmqht webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-dmqht bddd75ab-fb78-4cd2-8d25-f0b939a32ee2 1082609 0 2020-09-17 17:35:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa424d27 0xa424d28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-17 17:35:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.955: INFO: Pod "webserver-deployment-c7997dcc8-fhw5c" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fhw5c webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-fhw5c 8309cc64-a8f4-4ca4-ab05-b278732ea19f 1082585 0 2020-09-17 17:35:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa424ea7 0xa424ea8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-17 17:35:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.955: INFO: Pod "webserver-deployment-c7997dcc8-gb8bn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gb8bn webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-gb8bn b5cdfac4-25ba-4616-ab18-06e19601dff9 1082643 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa425027 0xa425028}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.957: INFO: Pod "webserver-deployment-c7997dcc8-jkp4f" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jkp4f webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-jkp4f 294d9891-a91b-406d-b2f6-1148de96c7bf 1082580 0 2020-09-17 17:35:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa425157 0xa425158}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-09-17 17:35:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.958: INFO: Pod "webserver-deployment-c7997dcc8-pcxzt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pcxzt webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-pcxzt 4c715862-4811-4fc5-9372-44da16c8d3fa 1082662 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa4252d7 0xa4252d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.959: INFO: Pod "webserver-deployment-c7997dcc8-q458h" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q458h webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-q458h 038ec162-5449-4474-9817-f06e738bde10 1082661 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa425407 0xa425408}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.961: INFO: Pod "webserver-deployment-c7997dcc8-rbm7f" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rbm7f webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-rbm7f ea2ad020-ff3d-4a89-8b60-45cbd5fa19f5 1082612 0 2020-09-17 17:35:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa425537 0xa425538}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-09-17 17:35:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.962: INFO: Pod "webserver-deployment-c7997dcc8-s4vml" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s4vml webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-s4vml 37114a8f-1671-4445-b887-8878f06b765e 1082673 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa4256b7 0xa4256b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.963: INFO: Pod "webserver-deployment-c7997dcc8-spfrf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-spfrf webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-spfrf fd5a0074-7eeb-4d9c-85a2-c863ff7627fe 1082641 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa4257e7 0xa4257e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 17:35:17.964: INFO: Pod "webserver-deployment-c7997dcc8-ztbq5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ztbq5 webserver-deployment-c7997dcc8- deployment-6521 /api/v1/namespaces/deployment-6521/pods/webserver-deployment-c7997dcc8-ztbq5 e8cd9aef-8754-43e7-9919-1ec64d1a47c0 1082651 0 2020-09-17 17:35:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dfc977cf-1909-4fa6-b6bf-368b5ecebbde 0xa425917 0xa425918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rw68g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rw68g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rw68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:35:17.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6521" for this suite.

• [SLOW TEST:15.436 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":181,"skipped":2938,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:35:18.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-832cde90-b050-4cb6-a694-ff858582285b
STEP: Creating a pod to test consume secrets
Sep 17 17:35:18.278: INFO: Waiting up to 5m0s for pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36" in namespace "secrets-9810" to be "success or failure"
Sep 17 17:35:18.298: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 19.61495ms
Sep 17 17:35:22.088: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.810372719s
Sep 17 17:35:24.300: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021779128s
Sep 17 17:35:27.794: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 9.515381863s
Sep 17 17:35:29.872: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 11.594138763s
Sep 17 17:35:32.229: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 13.951004234s
Sep 17 17:35:34.420: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 16.141385326s
Sep 17 17:35:36.568: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Pending", Reason="", readiness=false. Elapsed: 18.290361756s
Sep 17 17:35:39.193: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Running", Reason="", readiness=true. Elapsed: 20.914468784s
Sep 17 17:35:41.503: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Running", Reason="", readiness=true. Elapsed: 23.225347298s
Sep 17 17:35:43.510: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Running", Reason="", readiness=true. Elapsed: 25.232115683s
Sep 17 17:35:45.517: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.239153331s
STEP: Saw pod success
Sep 17 17:35:45.518: INFO: Pod "pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36" satisfied condition "success or failure"
Sep 17 17:35:45.522: INFO: Trying to get logs from node jerma-worker pod pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36 container secret-volume-test: 
STEP: delete the pod
Sep 17 17:35:45.607: INFO: Waiting for pod pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36 to disappear
Sep 17 17:35:45.614: INFO: Pod pod-secrets-8c7fd119-ebd1-44ca-8940-3fa47517cb36 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:35:45.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9810" for this suite.

• [SLOW TEST:27.533 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2949,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:35:45.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-8c006289-7d54-4b1c-b54f-9c8f795cc091
STEP: Creating a pod to test consume configMaps
Sep 17 17:35:45.811: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a" in namespace "configmap-1624" to be "success or failure"
Sep 17 17:35:45.816: INFO: Pod "pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.882324ms
Sep 17 17:35:47.822: INFO: Pod "pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009998538s
Sep 17 17:35:49.828: INFO: Pod "pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016177127s
STEP: Saw pod success
Sep 17 17:35:49.828: INFO: Pod "pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a" satisfied condition "success or failure"
Sep 17 17:35:49.833: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a container configmap-volume-test: 
STEP: delete the pod
Sep 17 17:35:49.954: INFO: Waiting for pod pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a to disappear
Sep 17 17:35:49.959: INFO: Pod pod-configmaps-ec9b1cea-6220-4ea0-bf04-31875fc5008a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:35:49.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1624" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2959,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:35:49.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-889d8a49-eafd-42f9-83c2-c3175233470b
STEP: Creating a pod to test consume secrets
Sep 17 17:35:50.085: INFO: Waiting up to 5m0s for pod "pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612" in namespace "secrets-6673" to be "success or failure"
Sep 17 17:35:50.098: INFO: Pod "pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612": Phase="Pending", Reason="", readiness=false. Elapsed: 12.333369ms
Sep 17 17:35:52.106: INFO: Pod "pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019886125s
Sep 17 17:35:54.113: INFO: Pod "pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027367194s
STEP: Saw pod success
Sep 17 17:35:54.113: INFO: Pod "pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612" satisfied condition "success or failure"
Sep 17 17:35:54.119: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612 container secret-volume-test: 
STEP: delete the pod
Sep 17 17:35:54.149: INFO: Waiting for pod pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612 to disappear
Sep 17 17:35:54.169: INFO: Pod pod-secrets-b52b9603-f0cf-413f-945a-4ebc38c6b612 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:35:54.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6673" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2999,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:35:54.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-6e083365-1b02-4664-95c7-f0a7af9bcbf5
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:35:54.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7591" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":185,"skipped":3008,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:35:54.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:35:54.540: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"677a3cb5-6532-448d-b50f-e9da333ddc59", Controller:(*bool)(0x64d6d8a), BlockOwnerDeletion:(*bool)(0x64d6d8b)}}
Sep 17 17:35:54.565: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6febc240-4aeb-4ecf-afa0-bb1f5b905837", Controller:(*bool)(0x8d7ef8a), BlockOwnerDeletion:(*bool)(0x8d7ef8b)}}
Sep 17 17:35:54.573: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"42f9b20e-cfc6-418d-972f-4d46854181bf", Controller:(*bool)(0x64d7072), BlockOwnerDeletion:(*bool)(0x64d7073)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:35:59.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6716" for this suite.

• [SLOW TEST:5.311 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":186,"skipped":3012,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:35:59.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 17 17:35:59.723: INFO: Waiting up to 5m0s for pod "pod-031c98d3-99ae-422c-9ab1-1aa602db6251" in namespace "emptydir-7376" to be "success or failure"
Sep 17 17:35:59.777: INFO: Pod "pod-031c98d3-99ae-422c-9ab1-1aa602db6251": Phase="Pending", Reason="", readiness=false. Elapsed: 53.532481ms
Sep 17 17:36:01.782: INFO: Pod "pod-031c98d3-99ae-422c-9ab1-1aa602db6251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059098155s
Sep 17 17:36:03.790: INFO: Pod "pod-031c98d3-99ae-422c-9ab1-1aa602db6251": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066721302s
STEP: Saw pod success
Sep 17 17:36:03.790: INFO: Pod "pod-031c98d3-99ae-422c-9ab1-1aa602db6251" satisfied condition "success or failure"
Sep 17 17:36:03.795: INFO: Trying to get logs from node jerma-worker2 pod pod-031c98d3-99ae-422c-9ab1-1aa602db6251 container test-container: 
STEP: delete the pod
Sep 17 17:36:03.853: INFO: Waiting for pod pod-031c98d3-99ae-422c-9ab1-1aa602db6251 to disappear
Sep 17 17:36:03.869: INFO: Pod pod-031c98d3-99ae-422c-9ab1-1aa602db6251 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:36:03.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7376" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3020,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:36:03.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-8a40bebe-1b39-4be3-9d4a-f0c48506d70e
STEP: Creating a pod to test consume configMaps
Sep 17 17:36:03.978: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0" in namespace "projected-9858" to be "success or failure"
Sep 17 17:36:04.000: INFO: Pod "pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.025805ms
Sep 17 17:36:06.007: INFO: Pod "pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028925919s
Sep 17 17:36:08.014: INFO: Pod "pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035714423s
STEP: Saw pod success
Sep 17 17:36:08.014: INFO: Pod "pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0" satisfied condition "success or failure"
Sep 17 17:36:08.018: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 17 17:36:08.054: INFO: Waiting for pod pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0 to disappear
Sep 17 17:36:08.068: INFO: Pod pod-projected-configmaps-86445282-113e-4aa6-a20f-4a96020f76f0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:36:08.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9858" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3025,"failed":0}

------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:36:08.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Sep 17 17:36:08.786: INFO: Pod name wrapped-volume-race-a91bb512-fbe4-42aa-a551-48e79f17626c: Found 0 pods out of 5
Sep 17 17:36:13.856: INFO: Pod name wrapped-volume-race-a91bb512-fbe4-42aa-a551-48e79f17626c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a91bb512-fbe4-42aa-a551-48e79f17626c in namespace emptydir-wrapper-9802, will wait for the garbage collector to delete the pods
Sep 17 17:36:27.983: INFO: Deleting ReplicationController wrapped-volume-race-a91bb512-fbe4-42aa-a551-48e79f17626c took: 9.294454ms
Sep 17 17:36:28.384: INFO: Terminating ReplicationController wrapped-volume-race-a91bb512-fbe4-42aa-a551-48e79f17626c pods took: 401.035998ms
STEP: Creating RC which spawns configmap-volume pods
Sep 17 17:36:38.830: INFO: Pod name wrapped-volume-race-2a8a9146-6f4d-44bc-b821-d08c5c6787a4: Found 0 pods out of 5
Sep 17 17:36:43.843: INFO: Pod name wrapped-volume-race-2a8a9146-6f4d-44bc-b821-d08c5c6787a4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2a8a9146-6f4d-44bc-b821-d08c5c6787a4 in namespace emptydir-wrapper-9802, will wait for the garbage collector to delete the pods
Sep 17 17:36:57.951: INFO: Deleting ReplicationController wrapped-volume-race-2a8a9146-6f4d-44bc-b821-d08c5c6787a4 took: 16.800165ms
Sep 17 17:36:58.253: INFO: Terminating ReplicationController wrapped-volume-race-2a8a9146-6f4d-44bc-b821-d08c5c6787a4 pods took: 301.076612ms
STEP: Creating RC which spawns configmap-volume pods
Sep 17 17:37:08.023: INFO: Pod name wrapped-volume-race-81296fe3-6201-4fb8-8f92-d3bf2a3fc67d: Found 1 pods out of 5
Sep 17 17:37:13.040: INFO: Pod name wrapped-volume-race-81296fe3-6201-4fb8-8f92-d3bf2a3fc67d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-81296fe3-6201-4fb8-8f92-d3bf2a3fc67d in namespace emptydir-wrapper-9802, will wait for the garbage collector to delete the pods
Sep 17 17:37:27.204: INFO: Deleting ReplicationController wrapped-volume-race-81296fe3-6201-4fb8-8f92-d3bf2a3fc67d took: 7.783446ms
Sep 17 17:37:27.605: INFO: Terminating ReplicationController wrapped-volume-race-81296fe3-6201-4fb8-8f92-d3bf2a3fc67d pods took: 400.836054ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:37:39.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9802" for this suite.

• [SLOW TEST:91.172 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":189,"skipped":3025,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:37:39.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:37:39.338: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:37:46.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-722" for this suite.

• [SLOW TEST:6.938 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":190,"skipped":3037,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:37:46.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 17:38:01.871: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 17 17:38:03.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961081, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961081, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961081, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961081, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 17:38:06.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:38:06.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-270-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:38:08.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3919" for this suite.
STEP: Destroying namespace "webhook-3919-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.057 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":191,"skipped":3069,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:38:08.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9055
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 17 17:38:08.366: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 17 17:38:30.524: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.67:8080/dial?request=hostname&protocol=udp&host=10.244.1.110&port=8081&tries=1'] Namespace:pod-network-test-9055 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:38:30.525: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:38:30.630927       7 log.go:172] (0x88e9030) (0x88e91f0) Create stream
I0917 17:38:30.631147       7 log.go:172] (0x88e9030) (0x88e91f0) Stream added, broadcasting: 1
I0917 17:38:30.636808       7 log.go:172] (0x88e9030) Reply frame received for 1
I0917 17:38:30.637075       7 log.go:172] (0x88e9030) (0xa43c460) Create stream
I0917 17:38:30.637195       7 log.go:172] (0x88e9030) (0xa43c460) Stream added, broadcasting: 3
I0917 17:38:30.639223       7 log.go:172] (0x88e9030) Reply frame received for 3
I0917 17:38:30.639363       7 log.go:172] (0x88e9030) (0xa3a8620) Create stream
I0917 17:38:30.639431       7 log.go:172] (0x88e9030) (0xa3a8620) Stream added, broadcasting: 5
I0917 17:38:30.641005       7 log.go:172] (0x88e9030) Reply frame received for 5
I0917 17:38:30.703116       7 log.go:172] (0x88e9030) Data frame received for 3
I0917 17:38:30.703413       7 log.go:172] (0xa43c460) (3) Data frame handling
I0917 17:38:30.703630       7 log.go:172] (0x88e9030) Data frame received for 5
I0917 17:38:30.703781       7 log.go:172] (0xa3a8620) (5) Data frame handling
I0917 17:38:30.703913       7 log.go:172] (0xa43c460) (3) Data frame sent
I0917 17:38:30.704095       7 log.go:172] (0x88e9030) Data frame received for 3
I0917 17:38:30.704416       7 log.go:172] (0xa43c460) (3) Data frame handling
I0917 17:38:30.705528       7 log.go:172] (0x88e9030) Data frame received for 1
I0917 17:38:30.705657       7 log.go:172] (0x88e91f0) (1) Data frame handling
I0917 17:38:30.705778       7 log.go:172] (0x88e91f0) (1) Data frame sent
I0917 17:38:30.705947       7 log.go:172] (0x88e9030) (0x88e91f0) Stream removed, broadcasting: 1
I0917 17:38:30.706161       7 log.go:172] (0x88e9030) Go away received
I0917 17:38:30.706669       7 log.go:172] (0x88e9030) (0x88e91f0) Stream removed, broadcasting: 1
I0917 17:38:30.706854       7 log.go:172] (0x88e9030) (0xa43c460) Stream removed, broadcasting: 3
I0917 17:38:30.707015       7 log.go:172] (0x88e9030) (0xa3a8620) Stream removed, broadcasting: 5
Sep 17 17:38:30.707: INFO: Waiting for responses: map[]
Sep 17 17:38:30.713: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.67:8080/dial?request=hostname&protocol=udp&host=10.244.2.66&port=8081&tries=1'] Namespace:pod-network-test-9055 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:38:30.713: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:38:30.817392       7 log.go:172] (0x8131ea0) (0xa7fa150) Create stream
I0917 17:38:30.817507       7 log.go:172] (0x8131ea0) (0xa7fa150) Stream added, broadcasting: 1
I0917 17:38:30.823110       7 log.go:172] (0x8131ea0) Reply frame received for 1
I0917 17:38:30.823450       7 log.go:172] (0x8131ea0) (0xa4be380) Create stream
I0917 17:38:30.823640       7 log.go:172] (0x8131ea0) (0xa4be380) Stream added, broadcasting: 3
I0917 17:38:30.826388       7 log.go:172] (0x8131ea0) Reply frame received for 3
I0917 17:38:30.826515       7 log.go:172] (0x8131ea0) (0xa7fa3f0) Create stream
I0917 17:38:30.826584       7 log.go:172] (0x8131ea0) (0xa7fa3f0) Stream added, broadcasting: 5
I0917 17:38:30.827898       7 log.go:172] (0x8131ea0) Reply frame received for 5
I0917 17:38:30.881265       7 log.go:172] (0x8131ea0) Data frame received for 3
I0917 17:38:30.881490       7 log.go:172] (0xa4be380) (3) Data frame handling
I0917 17:38:30.881615       7 log.go:172] (0x8131ea0) Data frame received for 5
I0917 17:38:30.881767       7 log.go:172] (0xa7fa3f0) (5) Data frame handling
I0917 17:38:30.881909       7 log.go:172] (0xa4be380) (3) Data frame sent
I0917 17:38:30.882018       7 log.go:172] (0x8131ea0) Data frame received for 3
I0917 17:38:30.882131       7 log.go:172] (0xa4be380) (3) Data frame handling
I0917 17:38:30.882982       7 log.go:172] (0x8131ea0) Data frame received for 1
I0917 17:38:30.883123       7 log.go:172] (0xa7fa150) (1) Data frame handling
I0917 17:38:30.883354       7 log.go:172] (0xa7fa150) (1) Data frame sent
I0917 17:38:30.883468       7 log.go:172] (0x8131ea0) (0xa7fa150) Stream removed, broadcasting: 1
I0917 17:38:30.883591       7 log.go:172] (0x8131ea0) Go away received
I0917 17:38:30.883879       7 log.go:172] (0x8131ea0) (0xa7fa150) Stream removed, broadcasting: 1
I0917 17:38:30.883968       7 log.go:172] (0x8131ea0) (0xa4be380) Stream removed, broadcasting: 3
I0917 17:38:30.884055       7 log.go:172] (0x8131ea0) (0xa7fa3f0) Stream removed, broadcasting: 5
Sep 17 17:38:30.884: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:38:30.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9055" for this suite.

• [SLOW TEST:22.642 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3075,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:38:30.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 17:38:46.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 17 17:38:48.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961126, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961126, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961126, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961126, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 17:38:51.241: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:38:51.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:38:53.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7226" for this suite.
STEP: Destroying namespace "webhook-7226-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.351 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":193,"skipped":3093,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:38:53.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-d82e7003-c9af-428c-b381-0ec769608c33
STEP: Creating a pod to test consume configMaps
Sep 17 17:38:53.364: INFO: Waiting up to 5m0s for pod "pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974" in namespace "configmap-2463" to be "success or failure"
Sep 17 17:38:53.385: INFO: Pod "pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974": Phase="Pending", Reason="", readiness=false. Elapsed: 20.576043ms
Sep 17 17:38:55.392: INFO: Pod "pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027435161s
Sep 17 17:38:57.398: INFO: Pod "pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033707589s
STEP: Saw pod success
Sep 17 17:38:57.398: INFO: Pod "pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974" satisfied condition "success or failure"
Sep 17 17:38:57.403: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974 container configmap-volume-test: 
STEP: delete the pod
Sep 17 17:38:57.470: INFO: Waiting for pod pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974 to disappear
Sep 17 17:38:57.562: INFO: Pod pod-configmaps-8fb5115b-28a4-48e3-b95b-c11945186974 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:38:57.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2463" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3118,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:38:57.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:39:13.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5135" for this suite.

• [SLOW TEST:16.159 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":195,"skipped":3129,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:39:13.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Sep 17 17:39:13.830: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix795030397/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:39:14.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2364" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":196,"skipped":3150,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:39:14.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 17:39:14.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17" in namespace "downward-api-2431" to be "success or failure"
Sep 17 17:39:14.905: INFO: Pod "downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17": Phase="Pending", Reason="", readiness=false. Elapsed: 72.802888ms
Sep 17 17:39:16.975: INFO: Pod "downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142784516s
Sep 17 17:39:19.083: INFO: Pod "downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.24989249s
STEP: Saw pod success
Sep 17 17:39:19.083: INFO: Pod "downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17" satisfied condition "success or failure"
Sep 17 17:39:19.087: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17 container client-container: 
STEP: delete the pod
Sep 17 17:39:19.220: INFO: Waiting for pod downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17 to disappear
Sep 17 17:39:19.224: INFO: Pod downwardapi-volume-82912924-e2cd-4fe0-9adc-a8a6c8215d17 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:39:19.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2431" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:39:19.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep 17 17:39:27.472: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 17 17:39:27.515: INFO: Pod pod-with-poststart-http-hook still exists
Sep 17 17:39:29.515: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 17 17:39:29.545: INFO: Pod pod-with-poststart-http-hook still exists
Sep 17 17:39:31.515: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 17 17:39:31.522: INFO: Pod pod-with-poststart-http-hook still exists
Sep 17 17:39:33.515: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 17 17:39:33.522: INFO: Pod pod-with-poststart-http-hook still exists
Sep 17 17:39:35.515: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 17 17:39:35.522: INFO: Pod pod-with-poststart-http-hook still exists
Sep 17 17:39:37.515: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 17 17:39:37.525: INFO: Pod pod-with-poststart-http-hook still exists
Sep 17 17:39:39.515: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 17 17:39:39.522: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:39:39.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-683" for this suite.

• [SLOW TEST:20.254 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3222,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:39:39.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 17:39:47.210: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 17 17:39:49.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961187, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961187, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961187, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961187, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 17:39:52.265: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:39:52.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-523-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:39:53.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3196" for this suite.
STEP: Destroying namespace "webhook-3196-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.079 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":199,"skipped":3228,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:39:53.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-486
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-486 to expose endpoints map[]
Sep 17 17:39:53.773: INFO: successfully validated that service multi-endpoint-test in namespace services-486 exposes endpoints map[] (33.573509ms elapsed)
STEP: Creating pod pod1 in namespace services-486
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-486 to expose endpoints map[pod1:[100]]
Sep 17 17:39:57.968: INFO: successfully validated that service multi-endpoint-test in namespace services-486 exposes endpoints map[pod1:[100]] (4.184796456s elapsed)
STEP: Creating pod pod2 in namespace services-486
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-486 to expose endpoints map[pod1:[100] pod2:[101]]
Sep 17 17:40:01.058: INFO: successfully validated that service multi-endpoint-test in namespace services-486 exposes endpoints map[pod1:[100] pod2:[101]] (3.084305585s elapsed)
STEP: Deleting pod pod1 in namespace services-486
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-486 to expose endpoints map[pod2:[101]]
Sep 17 17:40:01.082: INFO: successfully validated that service multi-endpoint-test in namespace services-486 exposes endpoints map[pod2:[101]] (17.041168ms elapsed)
STEP: Deleting pod pod2 in namespace services-486
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-486 to expose endpoints map[]
Sep 17 17:40:01.103: INFO: successfully validated that service multi-endpoint-test in namespace services-486 exposes endpoints map[] (16.150567ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:40:01.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-486" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:7.530 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":200,"skipped":3268,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:40:01.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 17 17:40:13.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:13.696: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:15.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:15.704: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:17.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:17.703: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:19.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:19.704: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:21.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:21.703: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:23.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:23.703: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:25.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:25.704: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:27.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:27.704: INFO: Pod pod-with-prestop-http-hook still exists
Sep 17 17:40:29.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 17 17:40:29.703: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:40:29.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-302" for this suite.

• [SLOW TEST:28.573 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:40:29.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Sep 17 17:40:29.806: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:40:39.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8172" for this suite.

• [SLOW TEST:9.695 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":202,"skipped":3306,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:40:39.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 17:40:49.147: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 17 17:40:51.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961249, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961249, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961249, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961249, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 17:40:54.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:40:54.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-24" for this suite.
STEP: Destroying namespace "webhook-24-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.121 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":203,"skipped":3322,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:40:54.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:40:54.657: INFO: Creating ReplicaSet my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b
Sep 17 17:40:54.670: INFO: Pod name my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b: Found 0 pods out of 1
Sep 17 17:40:59.676: INFO: Pod name my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b: Found 1 pods out of 1
Sep 17 17:40:59.676: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b" is running
Sep 17 17:40:59.681: INFO: Pod "my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b-cvj7z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 17:40:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 17:40:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 17:40:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-17 17:40:54 +0000 UTC Reason: Message:}])
Sep 17 17:40:59.681: INFO: Trying to dial the pod
Sep 17 17:41:04.698: INFO: Controller my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b: Got expected result from replica 1 [my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b-cvj7z]: "my-hostname-basic-17363126-ac3f-43bf-9885-3643f983fd1b-cvj7z", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:41:04.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-595" for this suite.

• [SLOW TEST:10.165 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":204,"skipped":3327,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:41:04.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Sep 17 17:41:04.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6644 -- logs-generator --log-lines-total 100 --run-duration 20s'
Sep 17 17:41:05.953: INFO: stderr: ""
Sep 17 17:41:05.954: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Sep 17 17:41:05.954: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Sep 17 17:41:05.954: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6644" to be "running and ready, or succeeded"
Sep 17 17:41:05.960: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.996899ms
Sep 17 17:41:07.967: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011917761s
Sep 17 17:41:09.986: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.031610769s
Sep 17 17:41:09.987: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Sep 17 17:41:09.987: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Sep 17 17:41:09.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6644'
Sep 17 17:41:11.133: INFO: stderr: ""
Sep 17 17:41:11.133: INFO: stdout: "I0917 17:41:08.369663       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/tlz 221\nI0917 17:41:08.569838       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/krc 272\nI0917 17:41:08.769879       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/8ghs 216\nI0917 17:41:08.969860       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/zzv 549\nI0917 17:41:09.169855       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/sql 405\nI0917 17:41:09.369849       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/vzw 349\nI0917 17:41:09.569874       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/wq7d 332\nI0917 17:41:09.769824       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/g8bq 202\nI0917 17:41:09.969917       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/c8g6 268\nI0917 17:41:10.169909       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/f7d6 340\nI0917 17:41:10.369835       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/j8d 441\nI0917 17:41:10.569859       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/nxh 281\nI0917 17:41:10.769846       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/vcfk 307\nI0917 17:41:10.969825       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/j9f 381\n"
STEP: limiting log lines
Sep 17 17:41:11.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6644 --tail=1'
Sep 17 17:41:12.255: INFO: stderr: ""
Sep 17 17:41:12.256: INFO: stdout: "I0917 17:41:12.169847       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/hb5 285\n"
Sep 17 17:41:12.256: INFO: got output "I0917 17:41:12.169847       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/hb5 285\n"
STEP: limiting log bytes
Sep 17 17:41:12.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6644 --limit-bytes=1'
Sep 17 17:41:13.401: INFO: stderr: ""
Sep 17 17:41:13.402: INFO: stdout: "I"
Sep 17 17:41:13.402: INFO: got output "I"
STEP: exposing timestamps
Sep 17 17:41:13.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6644 --tail=1 --timestamps'
Sep 17 17:41:14.559: INFO: stderr: ""
Sep 17 17:41:14.560: INFO: stdout: "2020-09-17T17:41:14.370025916Z I0917 17:41:14.369832       1 logs_generator.go:76] 30 PUT /api/v1/namespaces/default/pods/d8m 420\n"
Sep 17 17:41:14.560: INFO: got output "2020-09-17T17:41:14.370025916Z I0917 17:41:14.369832       1 logs_generator.go:76] 30 PUT /api/v1/namespaces/default/pods/d8m 420\n"
STEP: restricting to a time range
Sep 17 17:41:17.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6644 --since=1s'
Sep 17 17:41:18.205: INFO: stderr: ""
Sep 17 17:41:18.205: INFO: stdout: "I0917 17:41:17.369877       1 logs_generator.go:76] 45 POST /api/v1/namespaces/default/pods/587 357\nI0917 17:41:17.569826       1 logs_generator.go:76] 46 GET /api/v1/namespaces/kube-system/pods/g9c 245\nI0917 17:41:17.769907       1 logs_generator.go:76] 47 PUT /api/v1/namespaces/kube-system/pods/f4hh 533\nI0917 17:41:17.969822       1 logs_generator.go:76] 48 POST /api/v1/namespaces/ns/pods/489 431\nI0917 17:41:18.169829       1 logs_generator.go:76] 49 GET /api/v1/namespaces/default/pods/sz5 357\n"
Sep 17 17:41:18.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6644 --since=24h'
Sep 17 17:41:19.322: INFO: stderr: ""
Sep 17 17:41:19.322: INFO: stdout: "I0917 17:41:08.369663       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/tlz 221\nI0917 17:41:08.569838       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/krc 272\nI0917 17:41:08.769879       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/8ghs 216\nI0917 17:41:08.969860       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/zzv 549\nI0917 17:41:09.169855       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/sql 405\nI0917 17:41:09.369849       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/vzw 349\nI0917 17:41:09.569874       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/wq7d 332\nI0917 17:41:09.769824       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/g8bq 202\nI0917 17:41:09.969917       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/c8g6 268\nI0917 17:41:10.169909       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/f7d6 340\nI0917 17:41:10.369835       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/j8d 441\nI0917 17:41:10.569859       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/nxh 281\nI0917 17:41:10.769846       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/vcfk 307\nI0917 17:41:10.969825       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/j9f 381\nI0917 17:41:11.169867       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/hr9h 455\nI0917 17:41:11.369830       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/8kz 456\nI0917 17:41:11.569839       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/gqm 385\nI0917 17:41:11.769840       1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/5vr 544\nI0917 17:41:11.969822       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/lfv 518\nI0917 17:41:12.169847       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/hb5 285\nI0917 17:41:12.369846       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/pb6t 544\nI0917 17:41:12.569871       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/stfx 211\nI0917 17:41:12.769852       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/nfq 398\nI0917 17:41:12.969803       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/9bg 222\nI0917 17:41:13.169826       1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/vzbc 269\nI0917 17:41:13.369880       1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/grn 332\nI0917 17:41:13.569832       1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/hstt 412\nI0917 17:41:13.769824       1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/bqq 389\nI0917 17:41:13.969863       1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/wks 357\nI0917 17:41:14.169878       1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/2hg 552\nI0917 17:41:14.369832       1 logs_generator.go:76] 30 PUT /api/v1/namespaces/default/pods/d8m 420\nI0917 17:41:14.569860       1 logs_generator.go:76] 31 POST /api/v1/namespaces/kube-system/pods/4pb8 372\nI0917 17:41:14.769877       1 logs_generator.go:76] 32 PUT /api/v1/namespaces/kube-system/pods/bv4x 518\nI0917 17:41:14.969920       1 logs_generator.go:76] 33 PUT /api/v1/namespaces/kube-system/pods/c4c 384\nI0917 17:41:15.169864       1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/hj29 246\nI0917 17:41:15.369901       1 logs_generator.go:76] 35 POST /api/v1/namespaces/kube-system/pods/4zj 326\nI0917 17:41:15.569904       1 logs_generator.go:76] 36 PUT /api/v1/namespaces/kube-system/pods/9tp 589\nI0917 17:41:15.769882       1 logs_generator.go:76] 37 POST /api/v1/namespaces/default/pods/kzl 549\nI0917 17:41:15.969888       1 logs_generator.go:76] 38 POST /api/v1/namespaces/kube-system/pods/rrmt 281\nI0917 17:41:16.169955       1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/q4vn 380\nI0917 17:41:16.369937       1 logs_generator.go:76] 40 GET /api/v1/namespaces/default/pods/knj 390\nI0917 17:41:16.569816       1 logs_generator.go:76] 41 GET /api/v1/namespaces/kube-system/pods/ljk 398\nI0917 17:41:16.769910       1 logs_generator.go:76] 42 PUT /api/v1/namespaces/kube-system/pods/4bxh 517\nI0917 17:41:16.969887       1 logs_generator.go:76] 43 PUT /api/v1/namespaces/ns/pods/jm7 462\nI0917 17:41:17.169883       1 logs_generator.go:76] 44 PUT /api/v1/namespaces/kube-system/pods/72xc 388\nI0917 17:41:17.369877       1 logs_generator.go:76] 45 POST /api/v1/namespaces/default/pods/587 357\nI0917 17:41:17.569826       1 logs_generator.go:76] 46 GET /api/v1/namespaces/kube-system/pods/g9c 245\nI0917 17:41:17.769907       1 logs_generator.go:76] 47 PUT /api/v1/namespaces/kube-system/pods/f4hh 533\nI0917 17:41:17.969822       1 logs_generator.go:76] 48 POST /api/v1/namespaces/ns/pods/489 431\nI0917 17:41:18.169829       1 logs_generator.go:76] 49 GET /api/v1/namespaces/default/pods/sz5 357\nI0917 17:41:18.369848       1 logs_generator.go:76] 50 PUT /api/v1/namespaces/ns/pods/lmcb 200\nI0917 17:41:18.569799       1 logs_generator.go:76] 51 GET /api/v1/namespaces/kube-system/pods/pnz4 588\nI0917 17:41:18.769841       1 logs_generator.go:76] 52 POST /api/v1/namespaces/kube-system/pods/qhg6 417\nI0917 17:41:18.969843       1 logs_generator.go:76] 53 POST /api/v1/namespaces/ns/pods/kjs4 248\nI0917 17:41:19.169861       1 logs_generator.go:76] 54 GET /api/v1/namespaces/ns/pods/t9q 512\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Sep 17 17:41:19.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6644'
Sep 17 17:41:27.771: INFO: stderr: ""
Sep 17 17:41:27.771: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:41:27.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6644" for this suite.

• [SLOW TEST:23.070 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":205,"skipped":3330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:41:27.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Sep 17 17:41:32.420: INFO: Successfully updated pod "labelsupdate94b2b720-58c6-4ea5-98cc-72788f602a20"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:41:34.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7853" for this suite.

• [SLOW TEST:6.703 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3367,"failed":0}
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:41:34.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:41:34.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5269" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":207,"skipped":3367,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:41:34.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:41:50.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5022" for this suite.

• [SLOW TEST:16.339 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":208,"skipped":3381,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:41:50.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:42:02.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3923" for this suite.

• [SLOW TEST:11.219 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":209,"skipped":3385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:42:02.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:42:02.253: INFO: Creating deployment "test-recreate-deployment"
Sep 17 17:42:02.259: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Sep 17 17:42:02.299: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Sep 17 17:42:04.313: INFO: Waiting for deployment "test-recreate-deployment" to complete
Sep 17 17:42:04.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961322, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961322, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961322, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961322, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 17 17:42:06.325: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Sep 17 17:42:06.337: INFO: Updating deployment test-recreate-deployment
Sep 17 17:42:06.337: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run alongside old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Sep 17 17:42:06.881: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-4068 /apis/apps/v1/namespaces/deployment-4068/deployments/test-recreate-deployment c952312b-eca0-40da-a151-9c49b1c11715 1085929 2 2020-09-17 17:42:02 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9d10f08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-17 17:42:06 +0000 UTC,LastTransitionTime:2020-09-17 17:42:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-09-17 17:42:06 +0000 UTC,LastTransitionTime:2020-09-17 17:42:02 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Sep 17 17:42:06.941: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-4068 /apis/apps/v1/namespaces/deployment-4068/replicasets/test-recreate-deployment-5f94c574ff b1f9319c-4a6a-49e7-b646-dce5b56be8b7 1085926 1 2020-09-17 17:42:06 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c952312b-eca0-40da-a151-9c49b1c11715 0x8d2a447 0x8d2a448}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8d2a4a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Sep 17 17:42:06.941: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Sep 17 17:42:06.943: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-4068 /apis/apps/v1/namespaces/deployment-4068/replicasets/test-recreate-deployment-799c574856 665acc6d-6d78-480a-b143-9de9e54f0722 1085918 2 2020-09-17 17:42:02 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c952312b-eca0-40da-a151-9c49b1c11715 0x8d2a547 0x8d2a548}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8d2a608  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Sep 17 17:42:06.994: INFO: Pod "test-recreate-deployment-5f94c574ff-tv5ws" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-tv5ws test-recreate-deployment-5f94c574ff- deployment-4068 /api/v1/namespaces/deployment-4068/pods/test-recreate-deployment-5f94c574ff-tv5ws 0d25f5da-f1ed-4bb6-857d-9b7418378013 1085931 0 2020-09-17 17:42:06 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff b1f9319c-4a6a-49e7-b646-dce5b56be8b7 0xa4cdfa7 0xa4cdfa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn8f4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn8f4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn8f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:42:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:42:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:42:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:42:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-17 17:42:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:42:06.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4068" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":210,"skipped":3411,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:42:07.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Sep 17 17:42:14.426: INFO: 0 pods remaining
Sep 17 17:42:14.427: INFO: 0 pods have nil DeletionTimestamp
Sep 17 17:42:14.427: INFO: 
Sep 17 17:42:15.325: INFO: 0 pods remaining
Sep 17 17:42:15.325: INFO: 0 pods have nil DeletionTimestamp
Sep 17 17:42:15.325: INFO: 
STEP: Gathering metrics
W0917 17:42:16.571497       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 17 17:42:16.571: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:42:16.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7183" for this suite.

• [SLOW TEST:9.594 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":211,"skipped":3438,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:42:16.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 17:42:27.221: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 17 17:42:29.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961347, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961347, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961347, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961347, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 17:42:32.274: INFO: Waiting for the number of endpoints of service "e2e-test-webhook" to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:42:32.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4585" for this suite.
STEP: Destroying namespace "webhook-4585-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.878 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":212,"skipped":3448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:42:32.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:42:36.677: INFO: Waiting up to 5m0s for pod "client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa" in namespace "pods-2787" to be "success or failure"
Sep 17 17:42:36.720: INFO: Pod "client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa": Phase="Pending", Reason="", readiness=false. Elapsed: 42.515866ms
Sep 17 17:42:38.746: INFO: Pod "client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069181132s
Sep 17 17:42:40.752: INFO: Pod "client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075028619s
STEP: Saw pod success
Sep 17 17:42:40.752: INFO: Pod "client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa" satisfied condition "success or failure"
Sep 17 17:42:40.756: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa container env3cont: 
STEP: delete the pod
Sep 17 17:42:40.778: INFO: Waiting for pod client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa to disappear
Sep 17 17:42:40.783: INFO: Pod client-envvars-1d8989be-2c3b-408e-a621-0d8c7db54daa no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:42:40.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2787" for this suite.

• [SLOW TEST:8.296 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3489,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:42:40.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6612 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6612;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6612 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6612;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6612.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6612.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6612.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6612.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6612.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6612.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6612.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 217.19.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.19.217_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 217.19.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.19.217_tcp@PTR;
  sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6612 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6612;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6612 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6612;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6612.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6612.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6612.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6612.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6612.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6612.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6612.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6612.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6612.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 217.19.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.19.217_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 217.19.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.19.217_tcp@PTR;
  sleep 1; done

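The same queries expressed in Go, as a sketch: inside the probe pod, the resolver's search path (here dns-6612.svc.cluster.local and its parents) is what expands the partially qualified name, so a plain host lookup suffices.

package sketch

import "net"

// lookupPartial resolves the partly-qualified service name the same way
// the dig probes above do: the pod's search domains expand
// "dns-test-service" to dns-test-service.dns-6612.svc.cluster.local.
func lookupPartial() ([]string, error) {
	return net.LookupHost("dns-test-service")
}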
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 17 17:42:47.091: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.096: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.100: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612 from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.105: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612 from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.112: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.116: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.120: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.145: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.149: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.153: INFO: Unable to read jessie_udp@dns-test-service.dns-6612 from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.157: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612 from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.161: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.165: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.170: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.175: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:42:47.202: INFO: Lookups using dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6612 wheezy_tcp@dns-test-service.dns-6612 wheezy_udp@dns-test-service.dns-6612.svc wheezy_tcp@dns-test-service.dns-6612.svc wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6612 jessie_tcp@dns-test-service.dns-6612 jessie_udp@dns-test-service.dns-6612.svc jessie_tcp@dns-test-service.dns-6612.svc jessie_udp@_http._tcp.dns-test-service.dns-6612.svc jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc]

Sep 17 17:42:52 - 17:43:12: INFO: The same sixteen lookups (wheezy and jessie, udp and tcp, for dns-test-service, dns-test-service.dns-6612, dns-test-service.dns-6612.svc, and _http._tcp.dns-test-service.dns-6612.svc) failed in the same way on every 5-second retry, each with "the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)". Identical "Lookups using dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc failed for: [...]" summaries were logged at 17:42:52.320, 17:42:57.322, 17:43:02.354, and 17:43:07.321, and a further retry round began at 17:43:12.
Sep 17 17:43:12.240: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.245: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.277: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.282: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.285: INFO: Unable to read jessie_udp@dns-test-service.dns-6612 from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.288: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612 from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.292: INFO: Unable to read jessie_udp@dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.297: INFO: Unable to read jessie_tcp@dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.302: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.310: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc from pod dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc: the server could not find the requested resource (get pods dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc)
Sep 17 17:43:12.342: INFO: Lookups using dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6612 wheezy_tcp@dns-test-service.dns-6612 wheezy_udp@dns-test-service.dns-6612.svc wheezy_tcp@dns-test-service.dns-6612.svc wheezy_udp@_http._tcp.dns-test-service.dns-6612.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6612.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6612 jessie_tcp@dns-test-service.dns-6612 jessie_udp@dns-test-service.dns-6612.svc jessie_tcp@dns-test-service.dns-6612.svc jessie_udp@_http._tcp.dns-test-service.dns-6612.svc jessie_tcp@_http._tcp.dns-test-service.dns-6612.svc]

Sep 17 17:43:17.328: INFO: DNS probes using dns-6612/dns-test-d96a596a-f074-47ce-b2e5-8b8264ad8fcc succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:43:18.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6612" for this suite.

• [SLOW TEST:37.418 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":214,"skipped":3508,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:43:18.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0917 17:43:58.949984       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 17 17:43:58.950: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:43:58.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8317" for this suite.

• [SLOW TEST:40.743 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":215,"skipped":3525,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:43:58.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 17 17:44:03.103: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:44:03.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4841" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3564,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:44:03.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Sep 17 17:44:03.244: INFO: >>> kubeConfig: /root/.kube/config
Sep 17 17:44:21.284: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:45:16.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4942" for this suite.

• [SLOW TEST:73.438 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":217,"skipped":3568,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:45:16.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 17:45:16.719: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd" in namespace "downward-api-2585" to be "success or failure"
Sep 17 17:45:16.743: INFO: Pod "downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.407621ms
Sep 17 17:45:18.750: INFO: Pod "downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030686474s
Sep 17 17:45:20.757: INFO: Pod "downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038002212s
STEP: Saw pod success
Sep 17 17:45:20.758: INFO: Pod "downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd" satisfied condition "success or failure"
Sep 17 17:45:20.762: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd container client-container: 
STEP: delete the pod
Sep 17 17:45:20.796: INFO: Waiting for pod downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd to disappear
Sep 17 17:45:20.801: INFO: Pod downwardapi-volume-c464d9fc-4650-44ca-8899-1a4e93b1ccdd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:45:20.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2585" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3578,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:45:20.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 17 17:45:20.906: INFO: Waiting up to 5m0s for pod "pod-bd90b12f-a36f-4ee2-854d-822d95b42729" in namespace "emptydir-5299" to be "success or failure"
Sep 17 17:45:20.956: INFO: Pod "pod-bd90b12f-a36f-4ee2-854d-822d95b42729": Phase="Pending", Reason="", readiness=false. Elapsed: 49.022197ms
Sep 17 17:45:22.963: INFO: Pod "pod-bd90b12f-a36f-4ee2-854d-822d95b42729": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056433851s
Sep 17 17:45:24.970: INFO: Pod "pod-bd90b12f-a36f-4ee2-854d-822d95b42729": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063104784s
STEP: Saw pod success
Sep 17 17:45:24.970: INFO: Pod "pod-bd90b12f-a36f-4ee2-854d-822d95b42729" satisfied condition "success or failure"
Sep 17 17:45:24.975: INFO: Trying to get logs from node jerma-worker2 pod pod-bd90b12f-a36f-4ee2-854d-822d95b42729 container test-container: 
STEP: delete the pod
Sep 17 17:45:25.028: INFO: Waiting for pod pod-bd90b12f-a36f-4ee2-854d-822d95b42729 to disappear
Sep 17 17:45:25.034: INFO: Pod pod-bd90b12f-a36f-4ee2-854d-822d95b42729 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:45:25.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5299" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3579,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:45:25.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-8h46
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 17:45:25.169: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8h46" in namespace "subpath-5465" to be "success or failure"
Sep 17 17:45:25.178: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Pending", Reason="", readiness=false. Elapsed: 9.231571ms
Sep 17 17:45:27.185: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016190732s
Sep 17 17:45:29.191: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 4.022199223s
Sep 17 17:45:31.215: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 6.04612002s
Sep 17 17:45:33.221: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 8.052631188s
Sep 17 17:45:35.228: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 10.058971942s
Sep 17 17:45:37.234: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 12.065263387s
Sep 17 17:45:39.241: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 14.072610446s
Sep 17 17:45:41.248: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 16.079425536s
Sep 17 17:45:43.258: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 18.089237012s
Sep 17 17:45:45.265: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 20.095776478s
Sep 17 17:45:47.272: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Running", Reason="", readiness=true. Elapsed: 22.103059922s
Sep 17 17:45:49.279: INFO: Pod "pod-subpath-test-secret-8h46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.109841201s
STEP: Saw pod success
Sep 17 17:45:49.279: INFO: Pod "pod-subpath-test-secret-8h46" satisfied condition "success or failure"
Sep 17 17:45:49.284: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-8h46 container test-container-subpath-secret-8h46: 
STEP: delete the pod
Sep 17 17:45:49.319: INFO: Waiting for pod pod-subpath-test-secret-8h46 to disappear
Sep 17 17:45:49.335: INFO: Pod pod-subpath-test-secret-8h46 no longer exists
STEP: Deleting pod pod-subpath-test-secret-8h46
Sep 17 17:45:49.335: INFO: Deleting pod "pod-subpath-test-secret-8h46" in namespace "subpath-5465"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:45:49.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5465" for this suite.

• [SLOW TEST:24.307 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":220,"skipped":3587,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:45:49.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 17:45:49.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4" in namespace "downward-api-9294" to be "success or failure"
Sep 17 17:45:49.496: INFO: Pod "downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.481947ms
Sep 17 17:45:51.503: INFO: Pod "downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034020927s
Sep 17 17:45:53.510: INFO: Pod "downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041522943s
STEP: Saw pod success
Sep 17 17:45:53.511: INFO: Pod "downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4" satisfied condition "success or failure"
Sep 17 17:45:53.516: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4 container client-container: 
STEP: delete the pod
Sep 17 17:45:53.612: INFO: Waiting for pod downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4 to disappear
Sep 17 17:45:53.616: INFO: Pod downwardapi-volume-a49fa86c-1277-4356-bc39-64e8a9f4a1a4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:45:53.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9294" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3589,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:45:53.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:45:53.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3709" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":222,"skipped":3616,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:45:53.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep 17 17:46:01.918: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 17 17:46:01.922: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 17 17:46:03.923: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 17 17:46:03.928: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 17 17:46:05.923: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 17 17:46:06.016: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 17 17:46:07.923: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 17 17:46:07.930: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:46:07.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2227" for this suite.

• [SLOW TEST:14.189 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:46:07.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Sep 17 17:46:12.079: INFO: Pod pod-hostip-b797b439-1d18-43e6-8a5c-1d6760aaf33f has hostIP: 172.18.0.10
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:46:12.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8778" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3642,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:46:12.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:46:12.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Sep 17 17:46:30.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2454 create -f -'
Sep 17 17:46:34.580: INFO: stderr: ""
Sep 17 17:46:34.581: INFO: stdout: "e2e-test-crd-publish-openapi-1619-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Sep 17 17:46:34.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2454 delete e2e-test-crd-publish-openapi-1619-crds test-cr'
Sep 17 17:46:35.705: INFO: stderr: ""
Sep 17 17:46:35.705: INFO: stdout: "e2e-test-crd-publish-openapi-1619-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Sep 17 17:46:35.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2454 apply -f -'
Sep 17 17:46:37.195: INFO: stderr: ""
Sep 17 17:46:37.195: INFO: stdout: "e2e-test-crd-publish-openapi-1619-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Sep 17 17:46:37.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2454 delete e2e-test-crd-publish-openapi-1619-crds test-cr'
Sep 17 17:46:38.306: INFO: stderr: ""
Sep 17 17:46:38.306: INFO: stdout: "e2e-test-crd-publish-openapi-1619-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Sep 17 17:46:38.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1619-crds'
Sep 17 17:46:39.733: INFO: stderr: ""
Sep 17 17:46:39.733: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1619-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:46:57.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2454" for this suite.

• [SLOW TEST:45.599 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":225,"skipped":3689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:46:57.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:46:57.776: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/: 
alternatives.log
containers/
(the kubelet /logs/ directory listing above was returned verbatim for each of the proxy requests in this test; the log is truncated here, dropping the tail of the proxy test and the header of the next one, [sig-storage] Projected downwardAPI should set DefaultMode on files)
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 17:46:57.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53" in namespace "projected-9389" to be "success or failure"
Sep 17 17:46:58.013: INFO: Pod "downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53": Phase="Pending", Reason="", readiness=false. Elapsed: 28.389857ms
Sep 17 17:47:00.327: INFO: Pod "downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342268134s
Sep 17 17:47:02.334: INFO: Pod "downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.349265905s
STEP: Saw pod success
Sep 17 17:47:02.334: INFO: Pod "downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53" satisfied condition "success or failure"
Sep 17 17:47:02.339: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53 container client-container: 
STEP: delete the pod
Sep 17 17:47:02.374: INFO: Waiting for pod downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53 to disappear
Sep 17 17:47:02.390: INFO: Pod downwardapi-volume-5eb3b031-9a4f-49e1-91a0-2d816fd37a53 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:47:02.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9389" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3756,"failed":0}

------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:47:02.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-cd92cc09-55a5-4839-a412-9da14daff12a in namespace container-probe-3158
Sep 17 17:47:06.511: INFO: Started pod liveness-cd92cc09-55a5-4839-a412-9da14daff12a in namespace container-probe-3158
STEP: checking the pod's current state and verifying that restartCount is present
Sep 17 17:47:06.517: INFO: Initial restart count of pod liveness-cd92cc09-55a5-4839-a412-9da14daff12a is 0
Sep 17 17:47:26.586: INFO: Restart count of pod container-probe-3158/liveness-cd92cc09-55a5-4839-a412-9da14daff12a is now 1 (20.069279112s elapsed)
Sep 17 17:47:46.653: INFO: Restart count of pod container-probe-3158/liveness-cd92cc09-55a5-4839-a412-9da14daff12a is now 2 (40.136336261s elapsed)
Sep 17 17:48:06.741: INFO: Restart count of pod container-probe-3158/liveness-cd92cc09-55a5-4839-a412-9da14daff12a is now 3 (1m0.224587684s elapsed)
Sep 17 17:48:26.809: INFO: Restart count of pod container-probe-3158/liveness-cd92cc09-55a5-4839-a412-9da14daff12a is now 4 (1m20.292025913s elapsed)
Sep 17 17:49:33.027: INFO: Restart count of pod container-probe-3158/liveness-cd92cc09-55a5-4839-a412-9da14daff12a is now 5 (2m26.510184517s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:49:33.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3158" for this suite.

• [SLOW TEST:150.677 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3756,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:49:33.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Sep 17 17:49:45.222: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:45.222: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:45.328724       7 log.go:172] (0xb6da1c0) (0xb6da230) Create stream
I0917 17:49:45.328857       7 log.go:172] (0xb6da1c0) (0xb6da230) Stream added, broadcasting: 1
I0917 17:49:45.333830       7 log.go:172] (0xb6da1c0) Reply frame received for 1
I0917 17:49:45.334135       7 log.go:172] (0xb6da1c0) (0xb56a070) Create stream
I0917 17:49:45.334304       7 log.go:172] (0xb6da1c0) (0xb56a070) Stream added, broadcasting: 3
I0917 17:49:45.336745       7 log.go:172] (0xb6da1c0) Reply frame received for 3
I0917 17:49:45.336955       7 log.go:172] (0xb6da1c0) (0xb6da460) Create stream
I0917 17:49:45.337087       7 log.go:172] (0xb6da1c0) (0xb6da460) Stream added, broadcasting: 5
I0917 17:49:45.338763       7 log.go:172] (0xb6da1c0) Reply frame received for 5
I0917 17:49:45.428601       7 log.go:172] (0xb6da1c0) Data frame received for 3
I0917 17:49:45.428827       7 log.go:172] (0xb56a070) (3) Data frame handling
I0917 17:49:45.428956       7 log.go:172] (0xb6da1c0) Data frame received for 5
I0917 17:49:45.429147       7 log.go:172] (0xb6da460) (5) Data frame handling
I0917 17:49:45.429251       7 log.go:172] (0xb56a070) (3) Data frame sent
I0917 17:49:45.429385       7 log.go:172] (0xb6da1c0) Data frame received for 3
I0917 17:49:45.429495       7 log.go:172] (0xb56a070) (3) Data frame handling
I0917 17:49:45.430119       7 log.go:172] (0xb6da1c0) Data frame received for 1
I0917 17:49:45.430284       7 log.go:172] (0xb6da230) (1) Data frame handling
I0917 17:49:45.430456       7 log.go:172] (0xb6da230) (1) Data frame sent
I0917 17:49:45.430606       7 log.go:172] (0xb6da1c0) (0xb6da230) Stream removed, broadcasting: 1
I0917 17:49:45.430793       7 log.go:172] (0xb6da1c0) Go away received
I0917 17:49:45.431224       7 log.go:172] (0xb6da1c0) (0xb6da230) Stream removed, broadcasting: 1
I0917 17:49:45.431388       7 log.go:172] (0xb6da1c0) (0xb56a070) Stream removed, broadcasting: 3
I0917 17:49:45.431471       7 log.go:172] (0xb6da1c0) (0xb6da460) Stream removed, broadcasting: 5
Sep 17 17:49:45.431: INFO: Exec stderr: ""
Sep 17 17:49:45.432: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:45.432: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:45.545396       7 log.go:172] (0xb406380) (0xb4063f0) Create stream
I0917 17:49:45.545511       7 log.go:172] (0xb406380) (0xb4063f0) Stream added, broadcasting: 1
I0917 17:49:45.549088       7 log.go:172] (0xb406380) Reply frame received for 1
I0917 17:49:45.549294       7 log.go:172] (0xb406380) (0xb56a460) Create stream
I0917 17:49:45.549426       7 log.go:172] (0xb406380) (0xb56a460) Stream added, broadcasting: 3
I0917 17:49:45.551291       7 log.go:172] (0xb406380) Reply frame received for 3
I0917 17:49:45.551558       7 log.go:172] (0xb406380) (0xb4065b0) Create stream
I0917 17:49:45.551682       7 log.go:172] (0xb406380) (0xb4065b0) Stream added, broadcasting: 5
I0917 17:49:45.553420       7 log.go:172] (0xb406380) Reply frame received for 5
I0917 17:49:45.613237       7 log.go:172] (0xb406380) Data frame received for 5
I0917 17:49:45.613461       7 log.go:172] (0xb4065b0) (5) Data frame handling
I0917 17:49:45.613611       7 log.go:172] (0xb406380) Data frame received for 3
I0917 17:49:45.613800       7 log.go:172] (0xb56a460) (3) Data frame handling
I0917 17:49:45.614014       7 log.go:172] (0xb56a460) (3) Data frame sent
I0917 17:49:45.614174       7 log.go:172] (0xb406380) Data frame received for 3
I0917 17:49:45.614297       7 log.go:172] (0xb56a460) (3) Data frame handling
I0917 17:49:45.614761       7 log.go:172] (0xb406380) Data frame received for 1
I0917 17:49:45.614914       7 log.go:172] (0xb4063f0) (1) Data frame handling
I0917 17:49:45.615031       7 log.go:172] (0xb4063f0) (1) Data frame sent
I0917 17:49:45.615165       7 log.go:172] (0xb406380) (0xb4063f0) Stream removed, broadcasting: 1
I0917 17:49:45.615329       7 log.go:172] (0xb406380) Go away received
I0917 17:49:45.615767       7 log.go:172] (0xb406380) (0xb4063f0) Stream removed, broadcasting: 1
I0917 17:49:45.615947       7 log.go:172] (0xb406380) (0xb56a460) Stream removed, broadcasting: 3
I0917 17:49:45.616119       7 log.go:172] (0xb406380) (0xb4065b0) Stream removed, broadcasting: 5
Sep 17 17:49:45.616: INFO: Exec stderr: ""
Sep 17 17:49:45.616: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:45.617: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:45.719402       7 log.go:172] (0xb406bd0) (0xb406c40) Create stream
I0917 17:49:45.719570       7 log.go:172] (0xb406bd0) (0xb406c40) Stream added, broadcasting: 1
I0917 17:49:45.723721       7 log.go:172] (0xb406bd0) Reply frame received for 1
I0917 17:49:45.723898       7 log.go:172] (0xb406bd0) (0xb406e00) Create stream
I0917 17:49:45.723988       7 log.go:172] (0xb406bd0) (0xb406e00) Stream added, broadcasting: 3
I0917 17:49:45.725425       7 log.go:172] (0xb406bd0) Reply frame received for 3
I0917 17:49:45.725530       7 log.go:172] (0xb406bd0) (0xb20c4d0) Create stream
I0917 17:49:45.725592       7 log.go:172] (0xb406bd0) (0xb20c4d0) Stream added, broadcasting: 5
I0917 17:49:45.726805       7 log.go:172] (0xb406bd0) Reply frame received for 5
I0917 17:49:45.788380       7 log.go:172] (0xb406bd0) Data frame received for 3
I0917 17:49:45.788672       7 log.go:172] (0xb406e00) (3) Data frame handling
I0917 17:49:45.788810       7 log.go:172] (0xb406bd0) Data frame received for 5
I0917 17:49:45.789081       7 log.go:172] (0xb20c4d0) (5) Data frame handling
I0917 17:49:45.789315       7 log.go:172] (0xb406e00) (3) Data frame sent
I0917 17:49:45.789526       7 log.go:172] (0xb406bd0) Data frame received for 3
I0917 17:49:45.789651       7 log.go:172] (0xb406e00) (3) Data frame handling
I0917 17:49:45.789769       7 log.go:172] (0xb406bd0) Data frame received for 1
I0917 17:49:45.789862       7 log.go:172] (0xb406c40) (1) Data frame handling
I0917 17:49:45.789957       7 log.go:172] (0xb406c40) (1) Data frame sent
I0917 17:49:45.790049       7 log.go:172] (0xb406bd0) (0xb406c40) Stream removed, broadcasting: 1
I0917 17:49:45.790162       7 log.go:172] (0xb406bd0) Go away received
I0917 17:49:45.790593       7 log.go:172] (0xb406bd0) (0xb406c40) Stream removed, broadcasting: 1
I0917 17:49:45.790728       7 log.go:172] (0xb406bd0) (0xb406e00) Stream removed, broadcasting: 3
I0917 17:49:45.790804       7 log.go:172] (0xb406bd0) (0xb20c4d0) Stream removed, broadcasting: 5
Sep 17 17:49:45.790: INFO: Exec stderr: ""
Sep 17 17:49:45.790: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:45.791: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:45.890269       7 log.go:172] (0xb20ca10) (0xb20ca80) Create stream
I0917 17:49:45.890436       7 log.go:172] (0xb20ca10) (0xb20ca80) Stream added, broadcasting: 1
I0917 17:49:45.893587       7 log.go:172] (0xb20ca10) Reply frame received for 1
I0917 17:49:45.893780       7 log.go:172] (0xb20ca10) (0xb6da9a0) Create stream
I0917 17:49:45.893880       7 log.go:172] (0xb20ca10) (0xb6da9a0) Stream added, broadcasting: 3
I0917 17:49:45.895351       7 log.go:172] (0xb20ca10) Reply frame received for 3
I0917 17:49:45.895473       7 log.go:172] (0xb20ca10) (0xb20ccb0) Create stream
I0917 17:49:45.895544       7 log.go:172] (0xb20ca10) (0xb20ccb0) Stream added, broadcasting: 5
I0917 17:49:45.896960       7 log.go:172] (0xb20ca10) Reply frame received for 5
I0917 17:49:45.972131       7 log.go:172] (0xb20ca10) Data frame received for 5
I0917 17:49:45.972367       7 log.go:172] (0xb20ccb0) (5) Data frame handling
I0917 17:49:45.972504       7 log.go:172] (0xb20ca10) Data frame received for 3
I0917 17:49:45.972658       7 log.go:172] (0xb6da9a0) (3) Data frame handling
I0917 17:49:45.972854       7 log.go:172] (0xb6da9a0) (3) Data frame sent
I0917 17:49:45.972978       7 log.go:172] (0xb20ca10) Data frame received for 3
I0917 17:49:45.973089       7 log.go:172] (0xb6da9a0) (3) Data frame handling
I0917 17:49:45.973954       7 log.go:172] (0xb20ca10) Data frame received for 1
I0917 17:49:45.974064       7 log.go:172] (0xb20ca80) (1) Data frame handling
I0917 17:49:45.974174       7 log.go:172] (0xb20ca80) (1) Data frame sent
I0917 17:49:45.974298       7 log.go:172] (0xb20ca10) (0xb20ca80) Stream removed, broadcasting: 1
I0917 17:49:45.974448       7 log.go:172] (0xb20ca10) Go away received
I0917 17:49:45.974859       7 log.go:172] (0xb20ca10) (0xb20ca80) Stream removed, broadcasting: 1
I0917 17:49:45.975023       7 log.go:172] (0xb20ca10) (0xb6da9a0) Stream removed, broadcasting: 3
I0917 17:49:45.975135       7 log.go:172] (0xb20ca10) (0xb20ccb0) Stream removed, broadcasting: 5
Sep 17 17:49:45.975: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Sep 17 17:49:45.975: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:45.975: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:46.083817       7 log.go:172] (0xb869810) (0xb8698f0) Create stream
I0917 17:49:46.083940       7 log.go:172] (0xb869810) (0xb8698f0) Stream added, broadcasting: 1
I0917 17:49:46.087439       7 log.go:172] (0xb869810) Reply frame received for 1
I0917 17:49:46.087649       7 log.go:172] (0xb869810) (0xb20ce70) Create stream
I0917 17:49:46.087760       7 log.go:172] (0xb869810) (0xb20ce70) Stream added, broadcasting: 3
I0917 17:49:46.089579       7 log.go:172] (0xb869810) Reply frame received for 3
I0917 17:49:46.089726       7 log.go:172] (0xb869810) (0xb869c70) Create stream
I0917 17:49:46.089819       7 log.go:172] (0xb869810) (0xb869c70) Stream added, broadcasting: 5
I0917 17:49:46.091381       7 log.go:172] (0xb869810) Reply frame received for 5
I0917 17:49:46.144112       7 log.go:172] (0xb869810) Data frame received for 3
I0917 17:49:46.144409       7 log.go:172] (0xb20ce70) (3) Data frame handling
I0917 17:49:46.144563       7 log.go:172] (0xb869810) Data frame received for 5
I0917 17:49:46.144768       7 log.go:172] (0xb869c70) (5) Data frame handling
I0917 17:49:46.145013       7 log.go:172] (0xb20ce70) (3) Data frame sent
I0917 17:49:46.145150       7 log.go:172] (0xb869810) Data frame received for 3
I0917 17:49:46.145260       7 log.go:172] (0xb20ce70) (3) Data frame handling
I0917 17:49:46.145653       7 log.go:172] (0xb869810) Data frame received for 1
I0917 17:49:46.145841       7 log.go:172] (0xb8698f0) (1) Data frame handling
I0917 17:49:46.146021       7 log.go:172] (0xb8698f0) (1) Data frame sent
I0917 17:49:46.146160       7 log.go:172] (0xb869810) (0xb8698f0) Stream removed, broadcasting: 1
I0917 17:49:46.146371       7 log.go:172] (0xb869810) Go away received
I0917 17:49:46.146758       7 log.go:172] (0xb869810) (0xb8698f0) Stream removed, broadcasting: 1
I0917 17:49:46.146936       7 log.go:172] (0xb869810) (0xb20ce70) Stream removed, broadcasting: 3
I0917 17:49:46.147085       7 log.go:172] (0xb869810) (0xb869c70) Stream removed, broadcasting: 5
Sep 17 17:49:46.147: INFO: Exec stderr: ""
Sep 17 17:49:46.147: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:46.147: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:46.262935       7 log.go:172] (0xb407730) (0xb4077a0) Create stream
I0917 17:49:46.263070       7 log.go:172] (0xb407730) (0xb4077a0) Stream added, broadcasting: 1
I0917 17:49:46.266000       7 log.go:172] (0xb407730) Reply frame received for 1
I0917 17:49:46.266207       7 log.go:172] (0xb407730) (0xb407960) Create stream
I0917 17:49:46.266293       7 log.go:172] (0xb407730) (0xb407960) Stream added, broadcasting: 3
I0917 17:49:46.267790       7 log.go:172] (0xb407730) Reply frame received for 3
I0917 17:49:46.267970       7 log.go:172] (0xb407730) (0xb407b20) Create stream
I0917 17:49:46.268041       7 log.go:172] (0xb407730) (0xb407b20) Stream added, broadcasting: 5
I0917 17:49:46.269265       7 log.go:172] (0xb407730) Reply frame received for 5
I0917 17:49:46.321437       7 log.go:172] (0xb407730) Data frame received for 5
I0917 17:49:46.321630       7 log.go:172] (0xb407b20) (5) Data frame handling
I0917 17:49:46.321844       7 log.go:172] (0xb407730) Data frame received for 3
I0917 17:49:46.322041       7 log.go:172] (0xb407960) (3) Data frame handling
I0917 17:49:46.322202       7 log.go:172] (0xb407960) (3) Data frame sent
I0917 17:49:46.322393       7 log.go:172] (0xb407730) Data frame received for 3
I0917 17:49:46.322509       7 log.go:172] (0xb407960) (3) Data frame handling
I0917 17:49:46.323138       7 log.go:172] (0xb407730) Data frame received for 1
I0917 17:49:46.323262       7 log.go:172] (0xb4077a0) (1) Data frame handling
I0917 17:49:46.323387       7 log.go:172] (0xb4077a0) (1) Data frame sent
I0917 17:49:46.323525       7 log.go:172] (0xb407730) (0xb4077a0) Stream removed, broadcasting: 1
I0917 17:49:46.323736       7 log.go:172] (0xb407730) Go away received
I0917 17:49:46.324279       7 log.go:172] (0xb407730) (0xb4077a0) Stream removed, broadcasting: 1
I0917 17:49:46.324480       7 log.go:172] (0xb407730) (0xb407960) Stream removed, broadcasting: 3
I0917 17:49:46.324614       7 log.go:172] (0xb407730) (0xb407b20) Stream removed, broadcasting: 5
Sep 17 17:49:46.324: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Sep 17 17:49:46.325: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:46.325: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:46.438245       7 log.go:172] (0xb07c2a0) (0xb07c310) Create stream
I0917 17:49:46.438638       7 log.go:172] (0xb07c2a0) (0xb07c310) Stream added, broadcasting: 1
I0917 17:49:46.443491       7 log.go:172] (0xb07c2a0) Reply frame received for 1
I0917 17:49:46.443752       7 log.go:172] (0xb07c2a0) (0xb07c4d0) Create stream
I0917 17:49:46.443876       7 log.go:172] (0xb07c2a0) (0xb07c4d0) Stream added, broadcasting: 3
I0917 17:49:46.445604       7 log.go:172] (0xb07c2a0) Reply frame received for 3
I0917 17:49:46.445762       7 log.go:172] (0xb07c2a0) (0xb07c690) Create stream
I0917 17:49:46.445842       7 log.go:172] (0xb07c2a0) (0xb07c690) Stream added, broadcasting: 5
I0917 17:49:46.447005       7 log.go:172] (0xb07c2a0) Reply frame received for 5
I0917 17:49:46.503869       7 log.go:172] (0xb07c2a0) Data frame received for 5
I0917 17:49:46.504103       7 log.go:172] (0xb07c2a0) Data frame received for 3
I0917 17:49:46.504513       7 log.go:172] (0xb07c4d0) (3) Data frame handling
I0917 17:49:46.504691       7 log.go:172] (0xb07c690) (5) Data frame handling
I0917 17:49:46.504923       7 log.go:172] (0xb07c4d0) (3) Data frame sent
I0917 17:49:46.505173       7 log.go:172] (0xb07c2a0) Data frame received for 3
I0917 17:49:46.505380       7 log.go:172] (0xb07c4d0) (3) Data frame handling
I0917 17:49:46.506433       7 log.go:172] (0xb07c2a0) Data frame received for 1
I0917 17:49:46.506535       7 log.go:172] (0xb07c310) (1) Data frame handling
I0917 17:49:46.506623       7 log.go:172] (0xb07c310) (1) Data frame sent
I0917 17:49:46.506714       7 log.go:172] (0xb07c2a0) (0xb07c310) Stream removed, broadcasting: 1
I0917 17:49:46.506854       7 log.go:172] (0xb07c2a0) Go away received
I0917 17:49:46.507561       7 log.go:172] (0xb07c2a0) (0xb07c310) Stream removed, broadcasting: 1
I0917 17:49:46.507723       7 log.go:172] (0xb07c2a0) (0xb07c4d0) Stream removed, broadcasting: 3
I0917 17:49:46.507840       7 log.go:172] (0xb07c2a0) (0xb07c690) Stream removed, broadcasting: 5
Sep 17 17:49:46.507: INFO: Exec stderr: ""
Sep 17 17:49:46.508: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:46.508: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:46.615077       7 log.go:172] (0xb20d490) (0xb20d500) Create stream
I0917 17:49:46.615209       7 log.go:172] (0xb20d490) (0xb20d500) Stream added, broadcasting: 1
I0917 17:49:46.618333       7 log.go:172] (0xb20d490) Reply frame received for 1
I0917 17:49:46.618520       7 log.go:172] (0xb20d490) (0xb20d6c0) Create stream
I0917 17:49:46.618599       7 log.go:172] (0xb20d490) (0xb20d6c0) Stream added, broadcasting: 3
I0917 17:49:46.620018       7 log.go:172] (0xb20d490) Reply frame received for 3
I0917 17:49:46.620236       7 log.go:172] (0xb20d490) (0xb07cc40) Create stream
I0917 17:49:46.620314       7 log.go:172] (0xb20d490) (0xb07cc40) Stream added, broadcasting: 5
I0917 17:49:46.621574       7 log.go:172] (0xb20d490) Reply frame received for 5
I0917 17:49:46.680503       7 log.go:172] (0xb20d490) Data frame received for 3
I0917 17:49:46.680773       7 log.go:172] (0xb20d6c0) (3) Data frame handling
I0917 17:49:46.680951       7 log.go:172] (0xb20d6c0) (3) Data frame sent
I0917 17:49:46.681107       7 log.go:172] (0xb20d490) Data frame received for 3
I0917 17:49:46.681249       7 log.go:172] (0xb20d6c0) (3) Data frame handling
I0917 17:49:46.681546       7 log.go:172] (0xb20d490) Data frame received for 5
I0917 17:49:46.681801       7 log.go:172] (0xb07cc40) (5) Data frame handling
I0917 17:49:46.682113       7 log.go:172] (0xb20d490) Data frame received for 1
I0917 17:49:46.682247       7 log.go:172] (0xb20d500) (1) Data frame handling
I0917 17:49:46.682384       7 log.go:172] (0xb20d500) (1) Data frame sent
I0917 17:49:46.682518       7 log.go:172] (0xb20d490) (0xb20d500) Stream removed, broadcasting: 1
I0917 17:49:46.682711       7 log.go:172] (0xb20d490) Go away received
I0917 17:49:46.683215       7 log.go:172] (0xb20d490) (0xb20d500) Stream removed, broadcasting: 1
I0917 17:49:46.683390       7 log.go:172] (0xb20d490) (0xb20d6c0) Stream removed, broadcasting: 3
I0917 17:49:46.683552       7 log.go:172] (0xb20d490) (0xb07cc40) Stream removed, broadcasting: 5
Sep 17 17:49:46.683: INFO: Exec stderr: ""
Sep 17 17:49:46.684: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:46.684: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:46.797755       7 log.go:172] (0xb20dc70) (0xb20dce0) Create stream
I0917 17:49:46.797877       7 log.go:172] (0xb20dc70) (0xb20dce0) Stream added, broadcasting: 1
I0917 17:49:46.800653       7 log.go:172] (0xb20dc70) Reply frame received for 1
I0917 17:49:46.800771       7 log.go:172] (0xb20dc70) (0xb407e30) Create stream
I0917 17:49:46.800832       7 log.go:172] (0xb20dc70) (0xb407e30) Stream added, broadcasting: 3
I0917 17:49:46.802243       7 log.go:172] (0xb20dc70) Reply frame received for 3
I0917 17:49:46.802446       7 log.go:172] (0xb20dc70) (0xaf54000) Create stream
I0917 17:49:46.802558       7 log.go:172] (0xb20dc70) (0xaf54000) Stream added, broadcasting: 5
I0917 17:49:46.804196       7 log.go:172] (0xb20dc70) Reply frame received for 5
I0917 17:49:46.872234       7 log.go:172] (0xb20dc70) Data frame received for 5
I0917 17:49:46.872493       7 log.go:172] (0xaf54000) (5) Data frame handling
I0917 17:49:46.872697       7 log.go:172] (0xb20dc70) Data frame received for 3
I0917 17:49:46.872928       7 log.go:172] (0xb407e30) (3) Data frame handling
I0917 17:49:46.873178       7 log.go:172] (0xb407e30) (3) Data frame sent
I0917 17:49:46.873384       7 log.go:172] (0xb20dc70) Data frame received for 3
I0917 17:49:46.873551       7 log.go:172] (0xb407e30) (3) Data frame handling
I0917 17:49:46.873719       7 log.go:172] (0xb20dc70) Data frame received for 1
I0917 17:49:46.873803       7 log.go:172] (0xb20dce0) (1) Data frame handling
I0917 17:49:46.873892       7 log.go:172] (0xb20dce0) (1) Data frame sent
I0917 17:49:46.873992       7 log.go:172] (0xb20dc70) (0xb20dce0) Stream removed, broadcasting: 1
I0917 17:49:46.874110       7 log.go:172] (0xb20dc70) Go away received
I0917 17:49:46.874651       7 log.go:172] (0xb20dc70) (0xb20dce0) Stream removed, broadcasting: 1
I0917 17:49:46.874863       7 log.go:172] (0xb20dc70) (0xb407e30) Stream removed, broadcasting: 3
I0917 17:49:46.875018       7 log.go:172] (0xb20dc70) (0xaf54000) Stream removed, broadcasting: 5
Sep 17 17:49:46.875: INFO: Exec stderr: ""
Sep 17 17:49:46.875: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1132 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 17:49:46.875: INFO: >>> kubeConfig: /root/.kube/config
I0917 17:49:46.987154       7 log.go:172] (0xb6dbf80) (0xa75a000) Create stream
I0917 17:49:46.987256       7 log.go:172] (0xb6dbf80) (0xa75a000) Stream added, broadcasting: 1
I0917 17:49:46.990849       7 log.go:172] (0xb6dbf80) Reply frame received for 1
I0917 17:49:46.991112       7 log.go:172] (0xb6dbf80) (0xb07ce00) Create stream
I0917 17:49:46.991242       7 log.go:172] (0xb6dbf80) (0xb07ce00) Stream added, broadcasting: 3
I0917 17:49:46.993412       7 log.go:172] (0xb6dbf80) Reply frame received for 3
I0917 17:49:46.993537       7 log.go:172] (0xb6dbf80) (0xb07cfc0) Create stream
I0917 17:49:46.993599       7 log.go:172] (0xb6dbf80) (0xb07cfc0) Stream added, broadcasting: 5
I0917 17:49:46.994950       7 log.go:172] (0xb6dbf80) Reply frame received for 5
I0917 17:49:47.061695       7 log.go:172] (0xb6dbf80) Data frame received for 3
I0917 17:49:47.061949       7 log.go:172] (0xb07ce00) (3) Data frame handling
I0917 17:49:47.062118       7 log.go:172] (0xb6dbf80) Data frame received for 5
I0917 17:49:47.062365       7 log.go:172] (0xb07cfc0) (5) Data frame handling
I0917 17:49:47.062573       7 log.go:172] (0xb07ce00) (3) Data frame sent
I0917 17:49:47.062836       7 log.go:172] (0xb6dbf80) Data frame received for 3
I0917 17:49:47.063013       7 log.go:172] (0xb07ce00) (3) Data frame handling
I0917 17:49:47.063167       7 log.go:172] (0xb6dbf80) Data frame received for 1
I0917 17:49:47.063263       7 log.go:172] (0xa75a000) (1) Data frame handling
I0917 17:49:47.063363       7 log.go:172] (0xa75a000) (1) Data frame sent
I0917 17:49:47.063453       7 log.go:172] (0xb6dbf80) (0xa75a000) Stream removed, broadcasting: 1
I0917 17:49:47.063554       7 log.go:172] (0xb6dbf80) Go away received
I0917 17:49:47.064050       7 log.go:172] (0xb6dbf80) (0xa75a000) Stream removed, broadcasting: 1
I0917 17:49:47.064344       7 log.go:172] (0xb6dbf80) (0xb07ce00) Stream removed, broadcasting: 3
I0917 17:49:47.064495       7 log.go:172] (0xb6dbf80) (0xb07cfc0) Stream removed, broadcasting: 5
Sep 17 17:49:47.064: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:49:47.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1132" for this suite.

• [SLOW TEST:13.993 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3766,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:49:47.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Sep 17 17:49:47.141: INFO: Waiting up to 5m0s for pod "var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe" in namespace "var-expansion-2542" to be "success or failure"
Sep 17 17:49:47.192: INFO: Pod "var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 50.627563ms
Sep 17 17:49:49.199: INFO: Pod "var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057530314s
Sep 17 17:49:51.206: INFO: Pod "var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064547566s
STEP: Saw pod success
Sep 17 17:49:51.206: INFO: Pod "var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe" satisfied condition "success or failure"
Sep 17 17:49:51.211: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe container dapi-container: 
STEP: delete the pod
Sep 17 17:49:51.257: INFO: Waiting for pod var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe to disappear
Sep 17 17:49:51.269: INFO: Pod var-expansion-a55bd10b-7d75-4a7b-a976-d62a01a48dbe no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:49:51.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2542" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3781,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:49:51.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:50:02.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4692" for this suite.

• [SLOW TEST:11.182 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":231,"skipped":3800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:50:02.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 17 17:50:02.562: INFO: Waiting up to 5m0s for pod "pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8" in namespace "emptydir-6091" to be "success or failure"
Sep 17 17:50:02.567: INFO: Pod "pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4192ms
Sep 17 17:50:04.575: INFO: Pod "pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011958932s
Sep 17 17:50:06.581: INFO: Pod "pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018095285s
STEP: Saw pod success
Sep 17 17:50:06.581: INFO: Pod "pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8" satisfied condition "success or failure"
Sep 17 17:50:06.585: INFO: Trying to get logs from node jerma-worker pod pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8 container test-container: 
STEP: delete the pod
Sep 17 17:50:06.622: INFO: Waiting for pod pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8 to disappear
Sep 17 17:50:06.778: INFO: Pod pod-1a647a29-9d7a-4805-8e5c-286f0c76dce8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:50:06.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6091" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3828,"failed":0}
SSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:50:06.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Sep 17 17:50:06.833: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Sep 17 17:50:20.318: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Sep 17 17:50:22.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 17 17:50:24.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 17 17:50:26.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961820, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 17 17:50:29.549: INFO: Waited 529.171633ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:50:30.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4525" for this suite.

• [SLOW TEST:23.745 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":233,"skipped":3831,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:50:30.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:50:30.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Sep 17 17:50:48.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 create -f -'
Sep 17 17:50:52.625: INFO: stderr: ""
Sep 17 17:50:52.626: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Sep 17 17:50:52.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 delete e2e-test-crd-publish-openapi-6075-crds test-foo'
Sep 17 17:50:53.779: INFO: stderr: ""
Sep 17 17:50:53.780: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Sep 17 17:50:53.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 apply -f -'
Sep 17 17:50:55.201: INFO: stderr: ""
Sep 17 17:50:55.201: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Sep 17 17:50:55.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 delete e2e-test-crd-publish-openapi-6075-crds test-foo'
Sep 17 17:50:56.329: INFO: stderr: ""
Sep 17 17:50:56.329: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Sep 17 17:50:56.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 create -f -'
Sep 17 17:50:57.741: INFO: rc: 1
Sep 17 17:50:57.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 apply -f -'
Sep 17 17:50:59.108: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Sep 17 17:50:59.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 create -f -'
Sep 17 17:51:00.503: INFO: rc: 1
Sep 17 17:51:00.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6642 apply -f -'
Sep 17 17:51:01.900: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Sep 17 17:51:01.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6075-crds'
Sep 17 17:51:03.395: INFO: stderr: ""
Sep 17 17:51:03.395: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6075-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Sep 17 17:51:03.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6075-crds.metadata'
Sep 17 17:51:04.861: INFO: stderr: ""
Sep 17 17:51:04.861: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6075-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Sep 17 17:51:04.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6075-crds.spec'
Sep 17 17:51:06.260: INFO: stderr: ""
Sep 17 17:51:06.260: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6075-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep 17 17:51:06.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6075-crds.spec.bars'
Sep 17 17:51:07.737: INFO: stderr: ""
Sep 17 17:51:07.737: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6075-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 17 17:51:07.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6075-crds.spec.bars2'
Sep 17 17:51:09.188: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:51:27.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6642" for this suite.

• [SLOW TEST:56.550 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":234,"skipped":3864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:51:27.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-535ab377-0cfd-441b-8b3a-c332d5642b76 in namespace container-probe-5332
Sep 17 17:51:31.257: INFO: Started pod busybox-535ab377-0cfd-441b-8b3a-c332d5642b76 in namespace container-probe-5332
STEP: checking the pod's current state and verifying that restartCount is present
Sep 17 17:51:31.263: INFO: Initial restart count of pod busybox-535ab377-0cfd-441b-8b3a-c332d5642b76 is 0
Sep 17 17:52:23.427: INFO: Restart count of pod container-probe-5332/busybox-535ab377-0cfd-441b-8b3a-c332d5642b76 is now 1 (52.164668236s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:52:23.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5332" for this suite.

• [SLOW TEST:56.387 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3899,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:52:23.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ff39d405-d54d-4601-a3c6-b7985d6b31ed
STEP: Creating a pod to test consume configMaps
Sep 17 17:52:23.584: INFO: Waiting up to 5m0s for pod "pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab" in namespace "configmap-3510" to be "success or failure"
Sep 17 17:52:23.862: INFO: Pod "pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab": Phase="Pending", Reason="", readiness=false. Elapsed: 277.858065ms
Sep 17 17:52:25.869: INFO: Pod "pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284471168s
Sep 17 17:52:27.902: INFO: Pod "pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317607146s
STEP: Saw pod success
Sep 17 17:52:27.902: INFO: Pod "pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab" satisfied condition "success or failure"
Sep 17 17:52:27.906: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab container configmap-volume-test: 
STEP: delete the pod
Sep 17 17:52:27.955: INFO: Waiting for pod pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab to disappear
Sep 17 17:52:27.959: INFO: Pod pod-configmaps-57e6c63e-d5bf-4d9a-a088-7794cb8222ab no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:52:27.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3510" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3910,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:52:27.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-a6a362be-342e-4a99-9051-1b6c8e23ce60
STEP: Creating a pod to test consume secrets
Sep 17 17:52:28.079: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12" in namespace "projected-8815" to be "success or failure"
Sep 17 17:52:28.092: INFO: Pod "pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12": Phase="Pending", Reason="", readiness=false. Elapsed: 12.164854ms
Sep 17 17:52:30.099: INFO: Pod "pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019185514s
Sep 17 17:52:32.104: INFO: Pod "pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12": Phase="Running", Reason="", readiness=true. Elapsed: 4.024494096s
Sep 17 17:52:34.111: INFO: Pod "pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031665199s
STEP: Saw pod success
Sep 17 17:52:34.112: INFO: Pod "pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12" satisfied condition "success or failure"
Sep 17 17:52:34.118: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12 container projected-secret-volume-test: 
STEP: delete the pod
Sep 17 17:52:34.177: INFO: Waiting for pod pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12 to disappear
Sep 17 17:52:34.190: INFO: Pod pod-projected-secrets-a6921cf3-dbe1-49c7-96c2-a15a49380b12 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:52:34.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8815" for this suite.

• [SLOW TEST:6.229 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3920,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:52:34.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 17 17:52:34.283: INFO: Waiting up to 5m0s for pod "pod-04944a38-f909-48d1-ab4a-b70f0a191452" in namespace "emptydir-3983" to be "success or failure"
Sep 17 17:52:34.292: INFO: Pod "pod-04944a38-f909-48d1-ab4a-b70f0a191452": Phase="Pending", Reason="", readiness=false. Elapsed: 8.630143ms
Sep 17 17:52:36.299: INFO: Pod "pod-04944a38-f909-48d1-ab4a-b70f0a191452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015458793s
Sep 17 17:52:38.306: INFO: Pod "pod-04944a38-f909-48d1-ab4a-b70f0a191452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02228569s
STEP: Saw pod success
Sep 17 17:52:38.306: INFO: Pod "pod-04944a38-f909-48d1-ab4a-b70f0a191452" satisfied condition "success or failure"
Sep 17 17:52:38.311: INFO: Trying to get logs from node jerma-worker pod pod-04944a38-f909-48d1-ab4a-b70f0a191452 container test-container: 
STEP: delete the pod
Sep 17 17:52:38.381: INFO: Waiting for pod pod-04944a38-f909-48d1-ab4a-b70f0a191452 to disappear
Sep 17 17:52:38.387: INFO: Pod pod-04944a38-f909-48d1-ab4a-b70f0a191452 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:52:38.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3983" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3921,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:52:38.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3214
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3214
STEP: creating replication controller externalsvc in namespace services-3214
I0917 17:52:38.587644       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3214, replica count: 2
I0917 17:52:41.639534       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 17:52:44.640405       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Sep 17 17:52:44.684: INFO: Creating new exec pod
Sep 17 17:52:48.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3214 execpod6x5qm -- /bin/sh -x -c nslookup clusterip-service'
Sep 17 17:52:50.208: INFO: stderr: "I0917 17:52:49.946678    4726 log.go:172] (0x29ce000) (0x29ce070) Create stream\nI0917 17:52:49.951363    4726 log.go:172] (0x29ce000) (0x29ce070) Stream added, broadcasting: 1\nI0917 17:52:49.965023    4726 log.go:172] (0x29ce000) Reply frame received for 1\nI0917 17:52:49.965541    4726 log.go:172] (0x29ce000) (0x24a22a0) Create stream\nI0917 17:52:49.965608    4726 log.go:172] (0x29ce000) (0x24a22a0) Stream added, broadcasting: 3\nI0917 17:52:49.966808    4726 log.go:172] (0x29ce000) Reply frame received for 3\nI0917 17:52:49.967040    4726 log.go:172] (0x29ce000) (0x26d49a0) Create stream\nI0917 17:52:49.967125    4726 log.go:172] (0x29ce000) (0x26d49a0) Stream added, broadcasting: 5\nI0917 17:52:49.968187    4726 log.go:172] (0x29ce000) Reply frame received for 5\nI0917 17:52:50.059278    4726 log.go:172] (0x29ce000) Data frame received for 5\nI0917 17:52:50.059533    4726 log.go:172] (0x26d49a0) (5) Data frame handling\nI0917 17:52:50.060203    4726 log.go:172] (0x26d49a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0917 17:52:50.190577    4726 log.go:172] (0x29ce000) Data frame received for 3\nI0917 17:52:50.190777    4726 log.go:172] (0x24a22a0) (3) Data frame handling\nI0917 17:52:50.190979    4726 log.go:172] (0x24a22a0) (3) Data frame sent\nI0917 17:52:50.191412    4726 log.go:172] (0x29ce000) Data frame received for 3\nI0917 17:52:50.191537    4726 log.go:172] (0x24a22a0) (3) Data frame handling\nI0917 17:52:50.191650    4726 log.go:172] (0x24a22a0) (3) Data frame sent\nI0917 17:52:50.191927    4726 log.go:172] (0x29ce000) Data frame received for 5\nI0917 17:52:50.192130    4726 log.go:172] (0x26d49a0) (5) Data frame handling\nI0917 17:52:50.192522    4726 log.go:172] (0x29ce000) Data frame received for 3\nI0917 17:52:50.192725    4726 log.go:172] (0x24a22a0) (3) Data frame handling\nI0917 17:52:50.194157    4726 log.go:172] (0x29ce000) Data frame received for 1\nI0917 17:52:50.194336    4726 log.go:172] (0x29ce070) (1) Data frame handling\nI0917 17:52:50.194534    4726 log.go:172] (0x29ce070) (1) Data frame sent\nI0917 17:52:50.196254    4726 log.go:172] (0x29ce000) (0x29ce070) Stream removed, broadcasting: 1\nI0917 17:52:50.197255    4726 log.go:172] (0x29ce000) Go away received\nI0917 17:52:50.200079    4726 log.go:172] (0x29ce000) (0x29ce070) Stream removed, broadcasting: 1\nI0917 17:52:50.200392    4726 log.go:172] (0x29ce000) (0x24a22a0) Stream removed, broadcasting: 3\nI0917 17:52:50.200620    4726 log.go:172] (0x29ce000) (0x26d49a0) Stream removed, broadcasting: 5\n"
Sep 17 17:52:50.208: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3214.svc.cluster.local\tcanonical name = externalsvc.services-3214.svc.cluster.local.\nName:\texternalsvc.services-3214.svc.cluster.local\nAddress: 10.102.26.21\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3214, will wait for the garbage collector to delete the pods
Sep 17 17:52:50.272: INFO: Deleting ReplicationController externalsvc took: 7.971732ms
Sep 17 17:52:50.573: INFO: Terminating ReplicationController externalsvc pods took: 300.881038ms
Sep 17 17:52:57.862: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:52:57.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3214" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.515 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":239,"skipped":3930,"failed":0}
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:52:57.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Sep 17 17:53:04.542: INFO: Successfully updated pod "adopt-release-mlpg5"
STEP: Checking that the Job readopts the Pod
Sep 17 17:53:04.542: INFO: Waiting up to 15m0s for pod "adopt-release-mlpg5" in namespace "job-274" to be "adopted"
Sep 17 17:53:04.572: INFO: Pod "adopt-release-mlpg5": Phase="Running", Reason="", readiness=true. Elapsed: 29.817413ms
Sep 17 17:53:06.579: INFO: Pod "adopt-release-mlpg5": Phase="Running", Reason="", readiness=true. Elapsed: 2.037007795s
Sep 17 17:53:06.580: INFO: Pod "adopt-release-mlpg5" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Sep 17 17:53:07.095: INFO: Successfully updated pod "adopt-release-mlpg5"
STEP: Checking that the Job releases the Pod
Sep 17 17:53:07.096: INFO: Waiting up to 15m0s for pod "adopt-release-mlpg5" in namespace "job-274" to be "released"
Sep 17 17:53:07.113: INFO: Pod "adopt-release-mlpg5": Phase="Running", Reason="", readiness=true. Elapsed: 16.545975ms
Sep 17 17:53:07.113: INFO: Pod "adopt-release-mlpg5" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:53:07.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-274" for this suite.

• [SLOW TEST:9.292 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":240,"skipped":3930,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:53:07.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:53:07.317: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Sep 17 17:53:07.331: INFO: Pod name sample-pod: Found 0 pods out of 1
Sep 17 17:53:12.343: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep 17 17:53:12.344: INFO: Creating deployment "test-rolling-update-deployment"
Sep 17 17:53:12.375: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Sep 17 17:53:12.396: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Sep 17 17:53:14.454: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Sep 17 17:53:14.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961992, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961992, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961992, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961992, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 17 17:53:16.465: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Sep 17 17:53:16.484: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-2215 /apis/apps/v1/namespaces/deployment-2215/deployments/test-rolling-update-deployment ce352366-7df1-4864-873e-39b72b47afde 1089246 1 2020-09-17 17:53:12 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xb536038  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-17 17:53:12 +0000 UTC,LastTransitionTime:2020-09-17 17:53:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-09-17 17:53:15 +0000 UTC,LastTransitionTime:2020-09-17 17:53:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Sep 17 17:53:16.493: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-2215 /apis/apps/v1/namespaces/deployment-2215/replicasets/test-rolling-update-deployment-67cf4f6444 a3c171d2-2736-492e-94b9-5ef61237afa3 1089235 1 2020-09-17 17:53:12 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ce352366-7df1-4864-873e-39b72b47afde 0xb536497 0xb536498}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xb536508  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Sep 17 17:53:16.493: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Sep 17 17:53:16.494: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-2215 /apis/apps/v1/namespaces/deployment-2215/replicasets/test-rolling-update-controller 3c5a948f-3f9e-440c-a039-8e6b2a8e7fd6 1089244 2 2020-09-17 17:53:07 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ce352366-7df1-4864-873e-39b72b47afde 0xb5363c7 0xb5363c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xb536428  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Sep 17 17:53:16.500: INFO: Pod "test-rolling-update-deployment-67cf4f6444-chktd" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-chktd test-rolling-update-deployment-67cf4f6444- deployment-2215 /api/v1/namespaces/deployment-2215/pods/test-rolling-update-deployment-67cf4f6444-chktd 308b73ca-ba53-43a2-8e2d-6815f7f50f2c 1089234 0 2020-09-17 17:53:12 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 a3c171d2-2736-492e-94b9-5ef61237afa3 0xb3fb0a7 0xb3fb0a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fwf6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fwf6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fwf6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:53:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:53:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:53:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.2.105,StartTime:2020-09-17 17:53:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:53:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://278b9237652132e5877979e409bb099ee3dbf4d1edf0e79dcbda04067fe77a6b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:53:16.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2215" for this suite.

• [SLOW TEST:9.303 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":241,"skipped":3938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:53:16.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Sep 17 17:53:16.640: INFO: PodSpec: initContainers in spec.initContainers
Sep 17 17:54:02.891: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4e925c4c-a6d0-4d95-aee9-509e1fa14241", GenerateName:"", Namespace:"init-container-4569", SelfLink:"/api/v1/namespaces/init-container-4569/pods/pod-init-4e925c4c-a6d0-4d95-aee9-509e1fa14241", UID:"abb24da1-857a-4b60-a2f0-fafef23057a4", ResourceVersion:"1089450", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735961996, loc:(*time.Location)(0x610c660)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"639343391"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-65vdf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xbda61e0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-65vdf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-65vdf", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-65vdf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xbe50258), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x902c120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xbe502e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xbe50300)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xbe50308), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xbe5030c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961996, loc:(*time.Location)(0x610c660)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961996, loc:(*time.Location)(0x610c660)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961996, loc:(*time.Location)(0x610c660)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735961996, loc:(*time.Location)(0x610c660)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.8", PodIP:"10.244.1.142", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.142"}}, StartTime:(*v1.Time)(0xbda6280), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xbda62a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x783e3c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://84297cff1b64a4b500f4f5029775476bd81c22e46f34b3df95230751e1751782", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x9748100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x97480f0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xbe5038f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:54:02.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4569" for this suite.

• [SLOW TEST:46.405 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":242,"skipped":3962,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:54:02.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:54:34.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7268" for this suite.
STEP: Destroying namespace "nsdeletetest-4529" for this suite.
Sep 17 17:54:34.427: INFO: Namespace nsdeletetest-4529 was already deleted
STEP: Destroying namespace "nsdeletetest-8219" for this suite.

• [SLOW TEST:31.511 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":243,"skipped":3979,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:54:34.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7443
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7443
STEP: creating replication controller externalsvc in namespace services-7443
I0917 17:54:34.730504       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7443, replica count: 2
I0917 17:54:37.782139       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 17:54:40.782800       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 17:54:43.783661       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Sep 17 17:54:43.883: INFO: Creating new exec pod
Sep 17 17:54:47.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7443 execpodktz76 -- /bin/sh -x -c nslookup nodeport-service'
Sep 17 17:54:49.318: INFO: stderr: "I0917 17:54:49.174671    4749 log.go:172] (0x271dc00) (0x271dc70) Create stream\nI0917 17:54:49.177610    4749 log.go:172] (0x271dc00) (0x271dc70) Stream added, broadcasting: 1\nI0917 17:54:49.192019    4749 log.go:172] (0x271dc00) Reply frame received for 1\nI0917 17:54:49.192514    4749 log.go:172] (0x271dc00) (0x24a2150) Create stream\nI0917 17:54:49.192585    4749 log.go:172] (0x271dc00) (0x24a2150) Stream added, broadcasting: 3\nI0917 17:54:49.193647    4749 log.go:172] (0x271dc00) Reply frame received for 3\nI0917 17:54:49.193874    4749 log.go:172] (0x271dc00) (0x25c60e0) Create stream\nI0917 17:54:49.193943    4749 log.go:172] (0x271dc00) (0x25c60e0) Stream added, broadcasting: 5\nI0917 17:54:49.194827    4749 log.go:172] (0x271dc00) Reply frame received for 5\nI0917 17:54:49.289745    4749 log.go:172] (0x271dc00) Data frame received for 5\nI0917 17:54:49.290080    4749 log.go:172] (0x25c60e0) (5) Data frame handling\nI0917 17:54:49.290738    4749 log.go:172] (0x25c60e0) (5) Data frame sent\n+ nslookup nodeport-service\nI0917 17:54:49.298423    4749 log.go:172] (0x271dc00) Data frame received for 3\nI0917 17:54:49.298501    4749 log.go:172] (0x24a2150) (3) Data frame handling\nI0917 17:54:49.298580    4749 log.go:172] (0x24a2150) (3) Data frame sent\nI0917 17:54:49.299853    4749 log.go:172] (0x271dc00) Data frame received for 3\nI0917 17:54:49.300106    4749 log.go:172] (0x24a2150) (3) Data frame handling\nI0917 17:54:49.300480    4749 log.go:172] (0x24a2150) (3) Data frame sent\nI0917 17:54:49.300712    4749 log.go:172] (0x271dc00) Data frame received for 3\nI0917 17:54:49.300941    4749 log.go:172] (0x271dc00) Data frame received for 5\nI0917 17:54:49.301310    4749 log.go:172] (0x25c60e0) (5) Data frame handling\nI0917 17:54:49.301648    4749 log.go:172] (0x24a2150) (3) Data frame handling\nI0917 17:54:49.302538    4749 log.go:172] (0x271dc00) Data frame received for 1\nI0917 17:54:49.302707    4749 log.go:172] (0x271dc70) (1) Data frame handling\nI0917 17:54:49.302884    4749 log.go:172] (0x271dc70) (1) Data frame sent\nI0917 17:54:49.303921    4749 log.go:172] (0x271dc00) (0x271dc70) Stream removed, broadcasting: 1\nI0917 17:54:49.306753    4749 log.go:172] (0x271dc00) Go away received\nI0917 17:54:49.309813    4749 log.go:172] (0x271dc00) (0x271dc70) Stream removed, broadcasting: 1\nI0917 17:54:49.310183    4749 log.go:172] (0x271dc00) (0x24a2150) Stream removed, broadcasting: 3\nI0917 17:54:49.310405    4749 log.go:172] (0x271dc00) (0x25c60e0) Stream removed, broadcasting: 5\n"
Sep 17 17:54:49.319: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7443.svc.cluster.local\tcanonical name = externalsvc.services-7443.svc.cluster.local.\nName:\texternalsvc.services-7443.svc.cluster.local\nAddress: 10.97.234.10\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7443, will wait for the garbage collector to delete the pods
Sep 17 17:54:49.387: INFO: Deleting ReplicationController externalsvc took: 8.813768ms
Sep 17 17:54:49.688: INFO: Terminating ReplicationController externalsvc pods took: 301.046025ms
Sep 17 17:54:54.472: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:54:54.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7443" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:20.151 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":244,"skipped":3995,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:54:54.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 17 17:55:02.814: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 17 17:55:02.820: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 17 17:55:04.821: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 17 17:55:04.828: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 17 17:55:06.821: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 17 17:55:06.827: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:55:06.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6225" for this suite.

• [SLOW TEST:12.272 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3995,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:55:06.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Sep 17 17:55:06.940: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:55:15.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4070" for this suite.

• [SLOW TEST:8.646 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":246,"skipped":3997,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:55:15.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Sep 17 17:55:19.658: INFO: &Pod{ObjectMeta:{send-events-cc0aafca-e6e7-4810-ae10-7b15231f3aed  events-3573 /api/v1/namespaces/events-3573/pods/send-events-cc0aafca-e6e7-4810-ae10-7b15231f3aed 81fcca75-a048-4efe-b877-1064c3ec644d 1089891 0 2020-09-17 17:55:15 +0000 UTC   map[name:foo time:621239867] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2bvv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2bvv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2bvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:55:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:55:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-17 17:55:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.1.145,StartTime:2020-09-17 17:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-17 17:55:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6bd110a5a7212425f1194d63bf224aee3fd119c4c5f646a5d101a0e21b8cbe29,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.145,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Sep 17 17:55:21.672: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Sep 17 17:55:23.679: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:55:23.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3573" for this suite.

• [SLOW TEST:8.218 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":247,"skipped":4009,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:55:23.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4005.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4005.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4005.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4005.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4005.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.190.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.190.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.190.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.190.112_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4005.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4005.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4005.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4005.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4005.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4005.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.190.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.190.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.190.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.190.112_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 17 17:55:29.947: INFO: Unable to read wheezy_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:29.951: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:29.955: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:29.958: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:29.982: INFO: Unable to read jessie_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:29.986: INFO: Unable to read jessie_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:29.990: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:29.993: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:30.017: INFO: Lookups using dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5 failed for: [wheezy_udp@dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_udp@dns-test-service.dns-4005.svc.cluster.local jessie_tcp@dns-test-service.dns-4005.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local]

Sep 17 17:55:35.024: INFO: Unable to read wheezy_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.030: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.035: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.040: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.071: INFO: Unable to read jessie_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.076: INFO: Unable to read jessie_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.081: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.085: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:35.109: INFO: Lookups using dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5 failed for: [wheezy_udp@dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_udp@dns-test-service.dns-4005.svc.cluster.local jessie_tcp@dns-test-service.dns-4005.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local]

Sep 17 17:55:40.025: INFO: Unable to read wheezy_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.031: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.036: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.039: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.060: INFO: Unable to read jessie_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.064: INFO: Unable to read jessie_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.068: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.071: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:40.097: INFO: Lookups using dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5 failed for: [wheezy_udp@dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_udp@dns-test-service.dns-4005.svc.cluster.local jessie_tcp@dns-test-service.dns-4005.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local]

Sep 17 17:55:45.024: INFO: Unable to read wheezy_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.029: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.034: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.038: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.070: INFO: Unable to read jessie_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.075: INFO: Unable to read jessie_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.078: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.083: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:45.111: INFO: Lookups using dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5 failed for: [wheezy_udp@dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_udp@dns-test-service.dns-4005.svc.cluster.local jessie_tcp@dns-test-service.dns-4005.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local]

Sep 17 17:55:50.030: INFO: Unable to read wheezy_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.035: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.040: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.045: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.063: INFO: Unable to read jessie_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.065: INFO: Unable to read jessie_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.068: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.071: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:50.093: INFO: Lookups using dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5 failed for: [wheezy_udp@dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_udp@dns-test-service.dns-4005.svc.cluster.local jessie_tcp@dns-test-service.dns-4005.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local]

Sep 17 17:55:55.024: INFO: Unable to read wheezy_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.029: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.049: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.053: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.081: INFO: Unable to read jessie_udp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.085: INFO: Unable to read jessie_tcp@dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.089: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.093: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local from pod dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5: the server could not find the requested resource (get pods dns-test-d99b0926-5e78-464d-a761-6a52e99563d5)
Sep 17 17:55:55.218: INFO: Lookups using dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5 failed for: [wheezy_udp@dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@dns-test-service.dns-4005.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_udp@dns-test-service.dns-4005.svc.cluster.local jessie_tcp@dns-test-service.dns-4005.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4005.svc.cluster.local]

Sep 17 17:56:00.101: INFO: DNS probes using dns-4005/dns-test-d99b0926-5e78-464d-a761-6a52e99563d5 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:00.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4005" for this suite.

• [SLOW TEST:37.146 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":248,"skipped":4062,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:00.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-266df2ea-61f5-418f-ab07-efc3150918e9
STEP: Creating a pod to test consume secrets
Sep 17 17:56:01.013: INFO: Waiting up to 5m0s for pod "pod-secrets-dadef072-636b-4efb-83ef-29844e45e902" in namespace "secrets-6203" to be "success or failure"
Sep 17 17:56:01.030: INFO: Pod "pod-secrets-dadef072-636b-4efb-83ef-29844e45e902": Phase="Pending", Reason="", readiness=false. Elapsed: 17.050023ms
Sep 17 17:56:03.061: INFO: Pod "pod-secrets-dadef072-636b-4efb-83ef-29844e45e902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047467521s
Sep 17 17:56:05.068: INFO: Pod "pod-secrets-dadef072-636b-4efb-83ef-29844e45e902": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054851243s
Sep 17 17:56:07.075: INFO: Pod "pod-secrets-dadef072-636b-4efb-83ef-29844e45e902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061630956s
STEP: Saw pod success
Sep 17 17:56:07.075: INFO: Pod "pod-secrets-dadef072-636b-4efb-83ef-29844e45e902" satisfied condition "success or failure"
Sep 17 17:56:07.080: INFO: Trying to get logs from node jerma-worker pod pod-secrets-dadef072-636b-4efb-83ef-29844e45e902 container secret-volume-test: 
STEP: delete the pod
Sep 17 17:56:07.114: INFO: Waiting for pod pod-secrets-dadef072-636b-4efb-83ef-29844e45e902 to disappear
Sep 17 17:56:07.118: INFO: Pod pod-secrets-dadef072-636b-4efb-83ef-29844e45e902 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:07.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6203" for this suite.

• [SLOW TEST:6.283 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4063,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:07.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Sep 17 17:56:07.259: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-label-changed 16f89ded-1a22-41e6-b57b-33c5bfde8610 1090148 0 2020-09-17 17:56:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 17 17:56:07.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-label-changed 16f89ded-1a22-41e6-b57b-33c5bfde8610 1090149 0 2020-09-17 17:56:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep 17 17:56:07.261: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-label-changed 16f89ded-1a22-41e6-b57b-33c5bfde8610 1090150 0 2020-09-17 17:56:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Sep 17 17:56:17.350: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-label-changed 16f89ded-1a22-41e6-b57b-33c5bfde8610 1090191 0 2020-09-17 17:56:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 17 17:56:17.351: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-label-changed 16f89ded-1a22-41e6-b57b-33c5bfde8610 1090192 0 2020-09-17 17:56:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Sep 17 17:56:17.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-label-changed 16f89ded-1a22-41e6-b57b-33c5bfde8610 1090193 0 2020-09-17 17:56:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:17.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2494" for this suite.

• [SLOW TEST:10.196 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":250,"skipped":4069,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:17.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:22.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5677" for this suite.

• [SLOW TEST:5.161 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":251,"skipped":4072,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:22.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-50b1490f-4e3d-4b87-8f1e-d4a3b0c475a4
STEP: Creating configMap with name cm-test-opt-upd-2108f302-5217-4a4d-9b34-87ecec6cd330
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-50b1490f-4e3d-4b87-8f1e-d4a3b0c475a4
STEP: Updating configmap cm-test-opt-upd-2108f302-5217-4a4d-9b34-87ecec6cd330
STEP: Creating configMap with name cm-test-opt-create-b0d6a0e8-193a-4747-bd4e-ca4a19309b27
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:32.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7834" for this suite.

• [SLOW TEST:10.326 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:32.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-3c5edd24-f7b9-41fc-af08-4df4d390a6e3
STEP: Creating a pod to test consume configMaps
Sep 17 17:56:32.955: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5" in namespace "projected-2115" to be "success or failure"
Sep 17 17:56:32.976: INFO: Pod "pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.696023ms
Sep 17 17:56:35.019: INFO: Pod "pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064681167s
Sep 17 17:56:37.026: INFO: Pod "pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070994187s
STEP: Saw pod success
Sep 17 17:56:37.026: INFO: Pod "pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5" satisfied condition "success or failure"
Sep 17 17:56:37.031: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 17 17:56:37.067: INFO: Waiting for pod pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5 to disappear
Sep 17 17:56:37.091: INFO: Pod pod-projected-configmaps-d48742ea-73e9-411c-b2a6-6fdc127658a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:37.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2115" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:37.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:48.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7048" for this suite.

• [SLOW TEST:11.161 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":254,"skipped":4153,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:48.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 17:56:48.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198" in namespace "projected-8775" to be "success or failure"
Sep 17 17:56:48.428: INFO: Pod "downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198": Phase="Pending", Reason="", readiness=false. Elapsed: 14.864432ms
Sep 17 17:56:50.439: INFO: Pod "downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025703009s
Sep 17 17:56:52.445: INFO: Pod "downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032121726s
STEP: Saw pod success
Sep 17 17:56:52.446: INFO: Pod "downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198" satisfied condition "success or failure"
Sep 17 17:56:52.450: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198 container client-container: 
STEP: delete the pod
Sep 17 17:56:52.644: INFO: Waiting for pod downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198 to disappear
Sep 17 17:56:52.672: INFO: Pod downwardapi-volume-b2fe4cc0-f523-4ba0-aa40-2bb67b740198 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:56:52.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8775" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4164,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:56:52.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 17:57:08.076: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 17 17:57:10.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962228, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962228, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962228, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962228, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 17:57:13.129: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Sep 17 17:57:17.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1523 to-be-attached-pod -i -c=container1'
Sep 17 17:57:18.437: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:57:18.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1523" for this suite.
STEP: Destroying namespace "webhook-1523-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:25.849 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":256,"skipped":4194,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:57:18.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 17:57:18.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad" in namespace "downward-api-9854" to be "success or failure"
Sep 17 17:57:18.638: INFO: Pod "downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad": Phase="Pending", Reason="", readiness=false. Elapsed: 22.381484ms
Sep 17 17:57:20.645: INFO: Pod "downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029103833s
Sep 17 17:57:22.651: INFO: Pod "downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03514491s
STEP: Saw pod success
Sep 17 17:57:22.651: INFO: Pod "downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad" satisfied condition "success or failure"
Sep 17 17:57:22.674: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad container client-container: 
STEP: delete the pod
Sep 17 17:57:22.692: INFO: Waiting for pod downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad to disappear
Sep 17 17:57:22.696: INFO: Pod downwardapi-volume-386197bb-c6bb-420b-b1d2-fa27b362ddad no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:57:22.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9854" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4200,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:57:22.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:57:22.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4240" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":258,"skipped":4209,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:57:22.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-8m8b9 in namespace proxy-5965
I0917 17:57:23.014816       7 runners.go:189] Created replication controller with name: proxy-service-8m8b9, namespace: proxy-5965, replica count: 1
I0917 17:57:24.066297       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 17:57:25.069722       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 17:57:26.070411       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 17:57:27.071167       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 17:57:28.072331       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0917 17:57:29.073350       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0917 17:57:30.074105       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0917 17:57:31.074868       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0917 17:57:32.075625       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0917 17:57:33.076385       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0917 17:57:34.077159       7 runners.go:189] proxy-service-8m8b9 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 17 17:57:34.089: INFO: setup took 11.149217837s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
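The 320 attempts below cycle through pod and service proxy endpoints, with and without an explicit http/https scheme, by numeric port and by named port. A sketch of how those /proxy/ paths are assembled (the helper names are ours; the e2e test builds these paths inline):

```go
package main

import "fmt"

// podProxyPath and svcProxyPath are illustrative helpers. scheme may be
// "", "http", or "https"; port may be a number or a named service port.
func podProxyPath(ns, scheme, pod, port string) string {
	name := pod
	if scheme != "" {
		name = scheme + ":" + pod
	}
	if port != "" {
		name += ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/proxy/", ns, name)
}

func svcProxyPath(ns, scheme, svc, port string) string {
	name := svc
	if scheme != "" {
		name = scheme + ":" + svc
	}
	if port != "" {
		name += ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/services/%s/proxy/", ns, name)
}

func main() {
	// Reproduces two of the URL shapes seen in the attempt lines below.
	fmt.Println(podProxyPath("proxy-5965", "http", "proxy-service-8m8b9-nmdkb", "1080"))
	fmt.Println(svcProxyPath("proxy-5965", "https", "proxy-service-8m8b9", "tlsportname2"))
}
```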
Sep 17 17:57:34.102: INFO: (0) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 11.484334ms)
Sep 17 17:57:34.102: INFO: (0) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 12.503044ms)
Sep 17 17:57:34.103: INFO: (0) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 12.734903ms)
Sep 17 17:57:34.103: INFO: (0) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 12.652982ms)
Sep 17 17:57:34.103: INFO: (0) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 13.115074ms)
Sep 17 17:57:34.108: INFO: (0) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 17.694091ms)
Sep 17 17:57:34.108: INFO: (0) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 17.950338ms)
Sep 17 17:57:34.110: INFO: (0) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 20.020826ms)
Sep 17 17:57:34.110: INFO: (0) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 20.078234ms)
Sep 17 17:57:34.110: INFO: (0) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 20.288849ms)
Sep 17 17:57:34.111: INFO: (0) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 20.233745ms)
Sep 17 17:57:34.111: INFO: (0) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 19.907185ms)
Sep 17 17:57:34.111: INFO: (0) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: ... (200; 4.157229ms)
Sep 17 17:57:34.118: INFO: (1) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 5.862991ms)
Sep 17 17:57:34.118: INFO: (1) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 6.600763ms)
Sep 17 17:57:34.119: INFO: (1) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test<... (200; 8.532523ms)
Sep 17 17:57:34.121: INFO: (1) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 8.623601ms)
Sep 17 17:57:34.121: INFO: (1) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 8.767491ms)
Sep 17 17:57:34.122: INFO: (1) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 9.529429ms)
Sep 17 17:57:34.122: INFO: (1) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 10.198066ms)
Sep 17 17:57:34.128: INFO: (2) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 5.600115ms)
Sep 17 17:57:34.129: INFO: (2) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 6.660453ms)
Sep 17 17:57:34.129: INFO: (2) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 6.727701ms)
Sep 17 17:57:34.132: INFO: (2) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 9.151704ms)
Sep 17 17:57:34.132: INFO: (2) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 9.116159ms)
Sep 17 17:57:34.132: INFO: (2) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 9.455304ms)
Sep 17 17:57:34.132: INFO: (2) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 10.119867ms)
Sep 17 17:57:34.133: INFO: (2) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 10.360678ms)
Sep 17 17:57:34.133: INFO: (2) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 10.279822ms)
Sep 17 17:57:34.134: INFO: (2) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 10.399194ms)
Sep 17 17:57:34.134: INFO: (2) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 10.514756ms)
Sep 17 17:57:34.135: INFO: (2) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 12.018178ms)
Sep 17 17:57:34.141: INFO: (3) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 5.539169ms)
Sep 17 17:57:34.141: INFO: (3) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 5.603184ms)
Sep 17 17:57:34.141: INFO: (3) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 6.546189ms)
Sep 17 17:57:34.142: INFO: (3) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 6.727608ms)
Sep 17 17:57:34.143: INFO: (3) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 7.823875ms)
Sep 17 17:57:34.143: INFO: (3) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 7.896143ms)
Sep 17 17:57:34.144: INFO: (3) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 8.593895ms)
Sep 17 17:57:34.144: INFO: (3) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 8.767804ms)
Sep 17 17:57:34.145: INFO: (3) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 8.975467ms)
Sep 17 17:57:34.145: INFO: (3) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 9.230502ms)
Sep 17 17:57:34.145: INFO: (3) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 9.590041ms)
Sep 17 17:57:34.145: INFO: (3) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 9.807622ms)
Sep 17 17:57:34.145: INFO: (3) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 9.728192ms)
Sep 17 17:57:34.146: INFO: (3) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 10.239237ms)
Sep 17 17:57:34.146: INFO: (3) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 10.757006ms)
Sep 17 17:57:34.152: INFO: (4) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 4.882409ms)
Sep 17 17:57:34.152: INFO: (4) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 5.441858ms)
Sep 17 17:57:34.152: INFO: (4) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 5.464147ms)
Sep 17 17:57:34.153: INFO: (4) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 5.871212ms)
Sep 17 17:57:34.153: INFO: (4) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 6.519086ms)
Sep 17 17:57:34.153: INFO: (4) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 6.752244ms)
Sep 17 17:57:34.154: INFO: (4) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 7.038018ms)
Sep 17 17:57:34.154: INFO: (4) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 7.39719ms)
Sep 17 17:57:34.154: INFO: (4) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 7.322852ms)
Sep 17 17:57:34.154: INFO: (4) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 7.412374ms)
Sep 17 17:57:34.154: INFO: (4) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 7.527584ms)
Sep 17 17:57:34.155: INFO: (4) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 7.791075ms)
Sep 17 17:57:34.155: INFO: (4) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 7.960079ms)
Sep 17 17:57:34.155: INFO: (4) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 8.330312ms)
Sep 17 17:57:34.157: INFO: (4) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 9.560074ms)
Sep 17 17:57:34.163: INFO: (5) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 5.777883ms)
Sep 17 17:57:34.163: INFO: (5) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 6.730213ms)
Sep 17 17:57:34.164: INFO: (5) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 6.86413ms)
Sep 17 17:57:34.164: INFO: (5) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 6.868991ms)
Sep 17 17:57:34.164: INFO: (5) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 7.474971ms)
Sep 17 17:57:34.164: INFO: (5) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 7.296935ms)
Sep 17 17:57:34.164: INFO: (5) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 7.398614ms)
Sep 17 17:57:34.165: INFO: (5) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 7.80105ms)
Sep 17 17:57:34.166: INFO: (5) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 8.922903ms)
Sep 17 17:57:34.167: INFO: (5) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 10.34638ms)
Sep 17 17:57:34.168: INFO: (5) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 10.892137ms)
Sep 17 17:57:34.168: INFO: (5) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 11.027675ms)
Sep 17 17:57:34.168: INFO: (5) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 10.998342ms)
Sep 17 17:57:34.168: INFO: (5) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 11.169502ms)
Sep 17 17:57:34.172: INFO: (6) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 7.415968ms)
Sep 17 17:57:34.176: INFO: (6) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 7.74521ms)
Sep 17 17:57:34.176: INFO: (6) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 7.736424ms)
Sep 17 17:57:34.177: INFO: (6) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 8.06494ms)
Sep 17 17:57:34.177: INFO: (6) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 8.268893ms)
Sep 17 17:57:34.177: INFO: (6) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 8.539243ms)
Sep 17 17:57:34.177: INFO: (6) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 8.596771ms)
Sep 17 17:57:34.178: INFO: (6) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 9.147927ms)
Sep 17 17:57:34.178: INFO: (6) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 9.193256ms)
Sep 17 17:57:34.183: INFO: (7) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 5.260667ms)
Sep 17 17:57:34.186: INFO: (7) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 8.322259ms)
Sep 17 17:57:34.187: INFO: (7) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 8.116884ms)
Sep 17 17:57:34.187: INFO: (7) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 8.777434ms)
Sep 17 17:57:34.187: INFO: (7) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 8.690219ms)
Sep 17 17:57:34.187: INFO: (7) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 8.96461ms)
Sep 17 17:57:34.188: INFO: (7) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 9.309269ms)
Sep 17 17:57:34.188: INFO: (7) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 9.520827ms)
Sep 17 17:57:34.188: INFO: (7) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 9.552116ms)
Sep 17 17:57:34.188: INFO: (7) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 9.662638ms)
Sep 17 17:57:34.188: INFO: (7) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 9.935721ms)
Sep 17 17:57:34.189: INFO: (7) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 10.327026ms)
Sep 17 17:57:34.189: INFO: (7) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 10.774667ms)
Sep 17 17:57:34.193: INFO: (8) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 3.265303ms)
Sep 17 17:57:34.195: INFO: (8) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 5.404104ms)
Sep 17 17:57:34.195: INFO: (8) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 5.319563ms)
Sep 17 17:57:34.195: INFO: (8) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 5.729631ms)
Sep 17 17:57:34.196: INFO: (8) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test<... (200; 7.13606ms)
Sep 17 17:57:34.197: INFO: (8) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 7.108332ms)
Sep 17 17:57:34.197: INFO: (8) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 7.229555ms)
Sep 17 17:57:34.198: INFO: (8) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 7.765052ms)
Sep 17 17:57:34.198: INFO: (8) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 8.153416ms)
Sep 17 17:57:34.198: INFO: (8) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 7.889087ms)
Sep 17 17:57:34.198: INFO: (8) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 7.927674ms)
Sep 17 17:57:34.198: INFO: (8) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 8.133208ms)
Sep 17 17:57:34.198: INFO: (8) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 8.176368ms)
Sep 17 17:57:34.198: INFO: (8) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 8.394742ms)
Sep 17 17:57:34.202: INFO: (9) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 3.548085ms)
Sep 17 17:57:34.203: INFO: (9) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 4.202614ms)
Sep 17 17:57:34.203: INFO: (9) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 4.451293ms)
Sep 17 17:57:34.204: INFO: (9) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 5.513774ms)
Sep 17 17:57:34.204: INFO: (9) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 5.867451ms)
Sep 17 17:57:34.205: INFO: (9) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 6.276325ms)
Sep 17 17:57:34.205: INFO: (9) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 6.698946ms)
Sep 17 17:57:34.205: INFO: (9) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 6.949618ms)
Sep 17 17:57:34.206: INFO: (9) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 7.147098ms)
Sep 17 17:57:34.206: INFO: (9) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 7.420539ms)
Sep 17 17:57:34.206: INFO: (9) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 7.270867ms)
Sep 17 17:57:34.206: INFO: (9) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 7.403035ms)
Sep 17 17:57:34.206: INFO: (9) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 7.56677ms)
Sep 17 17:57:34.207: INFO: (9) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 8.40649ms)
Sep 17 17:57:34.207: INFO: (9) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 8.594706ms)
Sep 17 17:57:34.211: INFO: (10) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 3.722138ms)
Sep 17 17:57:34.212: INFO: (10) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 4.8416ms)
Sep 17 17:57:34.213: INFO: (10) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 4.916362ms)
Sep 17 17:57:34.213: INFO: (10) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 5.470845ms)
Sep 17 17:57:34.213: INFO: (10) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 5.847812ms)
Sep 17 17:57:34.214: INFO: (10) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 6.448299ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 6.734921ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 6.941819ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 7.320742ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 7.026159ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 7.240423ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 7.300428ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 7.643487ms)
Sep 17 17:57:34.215: INFO: (10) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test<... (200; 7.502815ms)
Sep 17 17:57:34.216: INFO: (10) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 8.267372ms)
Sep 17 17:57:34.220: INFO: (11) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 3.429645ms)
Sep 17 17:57:34.222: INFO: (11) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 6.038633ms)
Sep 17 17:57:34.223: INFO: (11) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 6.265267ms)
Sep 17 17:57:34.223: INFO: (11) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 6.239967ms)
Sep 17 17:57:34.223: INFO: (11) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 6.420499ms)
Sep 17 17:57:34.223: INFO: (11) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test<... (200; 7.362535ms)
Sep 17 17:57:34.224: INFO: (11) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 7.461052ms)
Sep 17 17:57:34.224: INFO: (11) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 7.563928ms)
Sep 17 17:57:34.224: INFO: (11) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 7.855611ms)
Sep 17 17:57:34.225: INFO: (11) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 8.166007ms)
Sep 17 17:57:34.225: INFO: (11) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 8.17595ms)
Sep 17 17:57:34.225: INFO: (11) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 8.824299ms)
Sep 17 17:57:34.226: INFO: (11) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 8.745781ms)
Sep 17 17:57:34.229: INFO: (12) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 3.024918ms)
Sep 17 17:57:34.230: INFO: (12) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 3.733002ms)
Sep 17 17:57:34.231: INFO: (12) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 4.865384ms)
Sep 17 17:57:34.231: INFO: (12) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 5.136585ms)
Sep 17 17:57:34.231: INFO: (12) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 5.235596ms)
Sep 17 17:57:34.231: INFO: (12) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 5.486012ms)
Sep 17 17:57:34.232: INFO: (12) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 5.85746ms)
Sep 17 17:57:34.232: INFO: (12) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 5.906381ms)
Sep 17 17:57:34.232: INFO: (12) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 6.081788ms)
Sep 17 17:57:34.233: INFO: (12) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 6.56708ms)
Sep 17 17:57:34.233: INFO: (12) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 6.810752ms)
Sep 17 17:57:34.233: INFO: (12) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 6.989288ms)
Sep 17 17:57:34.234: INFO: (12) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 7.620178ms)
Sep 17 17:57:34.234: INFO: (12) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 7.798159ms)
Sep 17 17:57:34.234: INFO: (12) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 7.797394ms)
Sep 17 17:57:34.242: INFO: (13) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 7.906929ms)
Sep 17 17:57:34.242: INFO: (13) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 7.819414ms)
Sep 17 17:57:34.243: INFO: (13) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 8.530692ms)
Sep 17 17:57:34.243: INFO: (13) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 8.507198ms)
Sep 17 17:57:34.243: INFO: (13) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 9.155436ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 8.968557ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 9.387381ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 9.586813ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 9.770515ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 9.898131ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 9.797278ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 9.827829ms)
Sep 17 17:57:34.244: INFO: (13) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 10.026737ms)
Sep 17 17:57:34.245: INFO: (13) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 10.095281ms)
Sep 17 17:57:34.245: INFO: (13) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 10.328731ms)
Sep 17 17:57:34.250: INFO: (14) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 5.679879ms)
Sep 17 17:57:34.251: INFO: (14) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 5.98986ms)
Sep 17 17:57:34.251: INFO: (14) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 6.071697ms)
Sep 17 17:57:34.251: INFO: (14) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 6.113013ms)
Sep 17 17:57:34.251: INFO: (14) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 6.301593ms)
Sep 17 17:57:34.253: INFO: (14) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 7.553089ms)
Sep 17 17:57:34.253: INFO: (14) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 8.347208ms)
Sep 17 17:57:34.253: INFO: (14) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 8.530796ms)
Sep 17 17:57:34.254: INFO: (14) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 8.517526ms)
Sep 17 17:57:34.254: INFO: (14) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 8.707061ms)
Sep 17 17:57:34.254: INFO: (14) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 8.832912ms)
Sep 17 17:57:34.255: INFO: (14) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 9.170198ms)
Sep 17 17:57:34.255: INFO: (14) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 9.42971ms)
Sep 17 17:57:34.255: INFO: (14) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 9.186391ms)
Sep 17 17:57:34.255: INFO: (14) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 9.606353ms)
Sep 17 17:57:34.258: INFO: (15) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 3.02859ms)
Sep 17 17:57:34.259: INFO: (15) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 3.301372ms)
Sep 17 17:57:34.259: INFO: (15) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 4.055937ms)
Sep 17 17:57:34.260: INFO: (15) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 4.908584ms)
Sep 17 17:57:34.260: INFO: (15) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname1/proxy/: foo (200; 4.911208ms)
Sep 17 17:57:34.260: INFO: (15) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 5.19473ms)
Sep 17 17:57:34.261: INFO: (15) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 6.025372ms)
Sep 17 17:57:34.261: INFO: (15) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 6.370723ms)
Sep 17 17:57:34.262: INFO: (15) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 6.346455ms)
Sep 17 17:57:34.262: INFO: (15) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 6.759982ms)
Sep 17 17:57:34.262: INFO: (15) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 6.642606ms)
Sep 17 17:57:34.262: INFO: (15) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 3.673189ms)
Sep 17 17:57:34.268: INFO: (16) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 4.002171ms)
Sep 17 17:57:34.268: INFO: (16) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 4.636735ms)
Sep 17 17:57:34.269: INFO: (16) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 5.29758ms)
Sep 17 17:57:34.269: INFO: (16) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 5.55601ms)
Sep 17 17:57:34.269: INFO: (16) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 5.722277ms)
Sep 17 17:57:34.269: INFO: (16) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 5.794634ms)
Sep 17 17:57:34.269: INFO: (16) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 6.113915ms)
Sep 17 17:57:34.270: INFO: (16) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname2/proxy/: bar (200; 6.250768ms)
Sep 17 17:57:34.270: INFO: (16) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname1/proxy/: tls baz (200; 6.329489ms)
Sep 17 17:57:34.270: INFO: (16) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 6.294725ms)
Sep 17 17:57:34.271: INFO: (16) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test<... (200; 28.332588ms)
Sep 17 17:57:34.301: INFO: (17) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 28.666525ms)
Sep 17 17:57:34.301: INFO: (17) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 29.167627ms)
Sep 17 17:57:34.301: INFO: (17) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 29.303444ms)
Sep 17 17:57:34.302: INFO: (17) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 29.442395ms)
Sep 17 17:57:34.302: INFO: (17) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 30.197034ms)
Sep 17 17:57:34.303: INFO: (17) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 30.532717ms)
Sep 17 17:57:34.302: INFO: (17) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 30.484288ms)
Sep 17 17:57:34.303: INFO: (17) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 30.596423ms)
Sep 17 17:57:34.303: INFO: (17) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test (200; 14.692747ms)
Sep 17 17:57:34.321: INFO: (18) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 14.863804ms)
Sep 17 17:57:34.321: INFO: (18) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 14.274693ms)
Sep 17 17:57:34.321: INFO: (18) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:1080/proxy/: test<... (200; 14.615905ms)
Sep 17 17:57:34.321: INFO: (18) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 14.766458ms)
Sep 17 17:57:34.321: INFO: (18) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 15.336427ms)
Sep 17 17:57:34.322: INFO: (18) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test<... (200; 20.924667ms)
Sep 17 17:57:34.344: INFO: (19) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:462/proxy/: tls qux (200; 21.355946ms)
Sep 17 17:57:34.344: INFO: (19) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:1080/proxy/: ... (200; 21.672388ms)
Sep 17 17:57:34.344: INFO: (19) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:460/proxy/: tls baz (200; 21.593458ms)
Sep 17 17:57:34.345: INFO: (19) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 22.354726ms)
Sep 17 17:57:34.345: INFO: (19) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 22.519107ms)
Sep 17 17:57:34.345: INFO: (19) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:162/proxy/: bar (200; 22.721385ms)
Sep 17 17:57:34.345: INFO: (19) /api/v1/namespaces/proxy-5965/pods/http:proxy-service-8m8b9-nmdkb:160/proxy/: foo (200; 22.669345ms)
Sep 17 17:57:34.346: INFO: (19) /api/v1/namespaces/proxy-5965/services/proxy-service-8m8b9:portname1/proxy/: foo (200; 22.8475ms)
Sep 17 17:57:34.346: INFO: (19) /api/v1/namespaces/proxy-5965/services/http:proxy-service-8m8b9:portname2/proxy/: bar (200; 23.192535ms)
Sep 17 17:57:34.347: INFO: (19) /api/v1/namespaces/proxy-5965/pods/proxy-service-8m8b9-nmdkb/proxy/: test (200; 24.236302ms)
Sep 17 17:57:34.347: INFO: (19) /api/v1/namespaces/proxy-5965/services/https:proxy-service-8m8b9:tlsportname2/proxy/: tls qux (200; 24.277903ms)
Sep 17 17:57:34.347: INFO: (19) /api/v1/namespaces/proxy-5965/pods/https:proxy-service-8m8b9-nmdkb:443/proxy/: test<... (200)
STEP: deleting ReplicationController proxy-service-8m8b9 in namespace proxy-5965, will wait for the garbage collector to delete the pods
[AfterEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Destroying namespace "proxy-5965" for this suite.
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":259,"skipped":4227,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Sep 17 17:57:47.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c" in namespace "projected-7573" to be "success or failure"
Sep 17 17:57:47.854: INFO: Pod "downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.424327ms
Sep 17 17:57:49.861: INFO: Pod "downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016527693s
Sep 17 17:57:51.867: INFO: Pod "downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02266848s
STEP: Saw pod success
Sep 17 17:57:51.867: INFO: Pod "downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c" satisfied condition "success or failure"
Sep 17 17:57:51.872: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c container client-container: 
STEP: delete the pod
Sep 17 17:57:51.890: INFO: Waiting for pod downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c to disappear
Sep 17 17:57:51.895: INFO: Pod downwardapi-volume-3940a010-57de-4b3a-822d-7469dc74992c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:57:51.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7573" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:57:51.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Sep 17 17:57:52.254: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 17 17:57:52.294: INFO: Waiting for terminating namespaces to be deleted...
Sep 17 17:57:52.297: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Sep 17 17:57:52.308: INFO: kindnet-m6c7w from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded)
Sep 17 17:57:52.308: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 17 17:57:52.308: INFO: kube-proxy-4jmbs from kube-system started at 2020-09-13 16:54:28 +0000 UTC (1 container statuses recorded)
Sep 17 17:57:52.308: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 17 17:57:52.308: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Sep 17 17:57:52.316: INFO: kindnet-4ckzg from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container statuses recorded)
Sep 17 17:57:52.316: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 17 17:57:52.316: INFO: kube-proxy-2w9xp from kube-system started at 2020-09-13 16:54:31 +0000 UTC (1 container statuses recorded)
Sep 17 17:57:52.316: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Sep 17 17:57:52.412: INFO: Pod kindnet-4ckzg requesting resource cpu=100m on Node jerma-worker2
Sep 17 17:57:52.412: INFO: Pod kindnet-m6c7w requesting resource cpu=100m on Node jerma-worker
Sep 17 17:57:52.412: INFO: Pod kube-proxy-2w9xp requesting resource cpu=0m on Node jerma-worker2
Sep 17 17:57:52.413: INFO: Pod kube-proxy-4jmbs requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Sep 17 17:57:52.413: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
Sep 17 17:57:52.421: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6c594ea9-b5f1-4055-abc8-c2a9b93a8ff0.1635a3eb06e7e476], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6842/filler-pod-6c594ea9-b5f1-4055-abc8-c2a9b93a8ff0 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6c594ea9-b5f1-4055-abc8-c2a9b93a8ff0.1635a3eb55f3427f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6c594ea9-b5f1-4055-abc8-c2a9b93a8ff0.1635a3eba891acaa], Reason = [Created], Message = [Created container filler-pod-6c594ea9-b5f1-4055-abc8-c2a9b93a8ff0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6c594ea9-b5f1-4055-abc8-c2a9b93a8ff0.1635a3ebb897d68f], Reason = [Started], Message = [Started container filler-pod-6c594ea9-b5f1-4055-abc8-c2a9b93a8ff0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7273d8d7-ab08-4c0d-80d4-c2b5db587b5f.1635a3eb08577264], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6842/filler-pod-7273d8d7-ab08-4c0d-80d4-c2b5db587b5f to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7273d8d7-ab08-4c0d-80d4-c2b5db587b5f.1635a3eb8cdd291b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7273d8d7-ab08-4c0d-80d4-c2b5db587b5f.1635a3ebccf952de], Reason = [Created], Message = [Created container filler-pod-7273d8d7-ab08-4c0d-80d4-c2b5db587b5f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7273d8d7-ab08-4c0d-80d4-c2b5db587b5f.1635a3ebdc5ee160], Reason = [Started], Message = [Started container filler-pod-7273d8d7-ab08-4c0d-80d4-c2b5db587b5f]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1635a3ebf8e26503], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:57:57.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6842" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:5.674 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":261,"skipped":4264,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:57:57.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:57:57.685: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-39dedfb7-53ee-4555-b6de-feb698e84954" in namespace "security-context-test-5309" to be "success or failure"
Sep 17 17:57:57.702: INFO: Pod "busybox-privileged-false-39dedfb7-53ee-4555-b6de-feb698e84954": Phase="Pending", Reason="", readiness=false. Elapsed: 17.106736ms
Sep 17 17:57:59.708: INFO: Pod "busybox-privileged-false-39dedfb7-53ee-4555-b6de-feb698e84954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023581123s
Sep 17 17:58:01.715: INFO: Pod "busybox-privileged-false-39dedfb7-53ee-4555-b6de-feb698e84954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030507457s
Sep 17 17:58:01.716: INFO: Pod "busybox-privileged-false-39dedfb7-53ee-4555-b6de-feb698e84954" satisfied condition "success or failure"
Sep 17 17:58:01.724: INFO: Got logs for pod "busybox-privileged-false-39dedfb7-53ee-4555-b6de-feb698e84954": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:58:01.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5309" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4274,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:58:01.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-f672adf6-4c06-4327-aeae-7113d926036d
STEP: Creating a pod to test consume configMaps
Sep 17 17:58:01.826: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136" in namespace "projected-4855" to be "success or failure"
Sep 17 17:58:01.889: INFO: Pod "pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136": Phase="Pending", Reason="", readiness=false. Elapsed: 63.627739ms
Sep 17 17:58:04.193: INFO: Pod "pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.367571753s
Sep 17 17:58:06.201: INFO: Pod "pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.374817224s
STEP: Saw pod success
Sep 17 17:58:06.201: INFO: Pod "pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136" satisfied condition "success or failure"
Sep 17 17:58:06.205: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 17 17:58:06.227: INFO: Waiting for pod pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136 to disappear
Sep 17 17:58:06.246: INFO: Pod pod-projected-configmaps-927b1c6c-eebb-44fa-9a84-9be47382d136 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:58:06.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4855" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4294,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:58:06.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Sep 17 17:58:06.363: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep 17 17:58:06.377: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:06.380: INFO: Number of nodes with available pods: 0
Sep 17 17:58:06.380: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 17:58:07.390: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:07.395: INFO: Number of nodes with available pods: 0
Sep 17 17:58:07.396: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 17:58:08.389: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:08.395: INFO: Number of nodes with available pods: 0
Sep 17 17:58:08.395: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 17:58:09.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:09.596: INFO: Number of nodes with available pods: 0
Sep 17 17:58:09.596: INFO: Node jerma-worker is running more than one daemon pod
Sep 17 17:58:10.390: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:10.423: INFO: Number of nodes with available pods: 2
Sep 17 17:58:10.423: INFO: Number of running nodes: 2, number of available pods: 2
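With all daemon pods available, the test next swaps the pod template image; the RollingUpdate strategy in the sketch below is what drives the pod-by-pod replacement logged after this point (label key and container name are illustrative, not the test's actual values):

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Updating Spec.Template (here httpd:2.4.38-alpine, which the test
	// later replaces with agnhost:2.8) makes the DaemonSet controller
	// delete and recreate pods node by node under RollingUpdate.
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label
	_ = appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app", // illustrative container name
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}
```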
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep 17 17:58:10.476: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:10.476: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:10.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:11.678: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:11.678: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:11.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:12.518: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:12.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:12.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:13.519: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:13.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:13.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:14.520: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:14.520: INFO: Pod daemon-set-kqbbs is not available
Sep 17 17:58:14.520: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:14.527: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:15.519: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:15.519: INFO: Pod daemon-set-kqbbs is not available
Sep 17 17:58:15.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:15.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:16.519: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:16.520: INFO: Pod daemon-set-kqbbs is not available
Sep 17 17:58:16.520: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:16.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:17.520: INFO: Wrong image for pod: daemon-set-kqbbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:17.520: INFO: Pod daemon-set-kqbbs is not available
Sep 17 17:58:17.520: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:17.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:18.519: INFO: Pod daemon-set-m896w is not available
Sep 17 17:58:18.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:18.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:19.520: INFO: Pod daemon-set-m896w is not available
Sep 17 17:58:19.521: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:19.529: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:20.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:20.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:21.520: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:21.529: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:22.521: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:22.521: INFO: Pod daemon-set-z2bvs is not available
Sep 17 17:58:22.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:23.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:23.519: INFO: Pod daemon-set-z2bvs is not available
Sep 17 17:58:23.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:24.520: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:24.520: INFO: Pod daemon-set-z2bvs is not available
Sep 17 17:58:24.530: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:25.520: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:25.520: INFO: Pod daemon-set-z2bvs is not available
Sep 17 17:58:25.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:26.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:26.519: INFO: Pod daemon-set-z2bvs is not available
Sep 17 17:58:26.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:27.519: INFO: Wrong image for pod: daemon-set-z2bvs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Sep 17 17:58:27.519: INFO: Pod daemon-set-z2bvs is not available
Sep 17 17:58:27.525: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:28.520: INFO: Pod daemon-set-j62pk is not available
Sep 17 17:58:28.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Sep 17 17:58:28.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:28.544: INFO: Number of nodes with available pods: 1
Sep 17 17:58:28.544: INFO: Node jerma-worker2 is running more than one daemon pod
Sep 17 17:58:29.554: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:29.560: INFO: Number of nodes with available pods: 1
Sep 17 17:58:29.560: INFO: Node jerma-worker2 is running more than one daemon pod
Sep 17 17:58:30.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:30.557: INFO: Number of nodes with available pods: 1
Sep 17 17:58:30.557: INFO: Node jerma-worker2 is running more than one daemon pod
Sep 17 17:58:31.554: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 17 17:58:31.560: INFO: Number of nodes with available pods: 2
Sep 17 17:58:31.560: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-392, will wait for the garbage collector to delete the pods
Sep 17 17:58:31.647: INFO: Deleting DaemonSet.extensions daemon-set took: 7.791907ms
Sep 17 17:58:32.048: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.118617ms
Sep 17 17:58:47.854: INFO: Number of nodes with available pods: 0
Sep 17 17:58:47.854: INFO: Number of running nodes: 0, number of available pods: 0
Sep 17 17:58:47.860: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-392/daemonsets","resourceVersion":"1091138"},"items":null}

Sep 17 17:58:47.865: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-392/pods","resourceVersion":"1091138"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:58:47.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-392" for this suite.

• [SLOW TEST:41.636 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":264,"skipped":4298,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:58:47.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:59:04.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5407" for this suite.

• [SLOW TEST:16.364 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":265,"skipped":4358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:59:04.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-1d75fa1e-2f5c-415b-bfa0-0d6a66800d65
STEP: Creating secret with name secret-projected-all-test-volume-ced7a557-8842-4209-8b40-89f6feb27d52
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 17 17:59:04.369: INFO: Waiting up to 5m0s for pod "projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819" in namespace "projected-2235" to be "success or failure"
Sep 17 17:59:04.376: INFO: Pod "projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819": Phase="Pending", Reason="", readiness=false. Elapsed: 7.213245ms
Sep 17 17:59:06.383: INFO: Pod "projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013613081s
Sep 17 17:59:08.389: INFO: Pod "projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020266756s
STEP: Saw pod success
Sep 17 17:59:08.389: INFO: Pod "projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819" satisfied condition "success or failure"
Sep 17 17:59:08.394: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819 container projected-all-volume-test: 
STEP: delete the pod
Sep 17 17:59:08.433: INFO: Waiting for pod projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819 to disappear
Sep 17 17:59:08.464: INFO: Pod projected-volume-0c329eec-91d0-45f4-b2c3-31a2939f6819 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:59:08.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2235" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4385,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:59:08.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Sep 17 17:59:08.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3066'
Sep 17 17:59:10.092: INFO: stderr: ""
Sep 17 17:59:10.093: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Sep 17 17:59:11.173: INFO: Selector matched 1 pod for map[app:agnhost]
Sep 17 17:59:11.173: INFO: Found 0 / 1
Sep 17 17:59:12.100: INFO: Selector matched 1 pod for map[app:agnhost]
Sep 17 17:59:12.101: INFO: Found 0 / 1
Sep 17 17:59:13.101: INFO: Selector matched 1 pod for map[app:agnhost]
Sep 17 17:59:13.101: INFO: Found 0 / 1
Sep 17 17:59:14.099: INFO: Selector matched 1 pod for map[app:agnhost]
Sep 17 17:59:14.099: INFO: Found 1 / 1
Sep 17 17:59:14.100: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Sep 17 17:59:14.105: INFO: Selector matched 1 pod for map[app:agnhost]
Sep 17 17:59:14.106: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Sep 17 17:59:14.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-fhrzm --namespace=kubectl-3066 -p {"metadata":{"annotations":{"x":"y"}}}'
Sep 17 17:59:15.188: INFO: stderr: ""
Sep 17 17:59:15.188: INFO: stdout: "pod/agnhost-master-fhrzm patched\n"
STEP: checking annotations
Sep 17 17:59:15.217: INFO: Selector matched 1 pod for map[app:agnhost]
Sep 17 17:59:15.218: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:59:15.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3066" for this suite.

• [SLOW TEST:6.756 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":267,"skipped":4405,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:59:15.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 17 17:59:15.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7299'
Sep 17 17:59:16.467: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 17 17:59:16.467: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Sep 17 17:59:16.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7299'
Sep 17 17:59:17.605: INFO: stderr: ""
Sep 17 17:59:17.605: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 17:59:17.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7299" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":268,"skipped":4406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 17:59:17.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Sep 17 17:59:18.195: INFO: >>> kubeConfig: /root/.kube/config
Sep 17 17:59:36.243: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:00:39.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3715" for this suite.

• [SLOW TEST:82.023 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":269,"skipped":4431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:00:39.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 18:00:45.487: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 17 18:00:47.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962445, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962445, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962446, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735962445, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 18:00:50.547: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:00:50.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9412" for this suite.
STEP: Destroying namespace "webhook-9412-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.118 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":270,"skipped":4491,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:00:50.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 17 18:00:50.870: INFO: Waiting up to 5m0s for pod "pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f" in namespace "emptydir-8330" to be "success or failure"
Sep 17 18:00:50.876: INFO: Pod "pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.852262ms
Sep 17 18:00:52.892: INFO: Pod "pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021677788s
Sep 17 18:00:54.899: INFO: Pod "pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028402009s
STEP: Saw pod success
Sep 17 18:00:54.899: INFO: Pod "pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f" satisfied condition "success or failure"
Sep 17 18:00:54.904: INFO: Trying to get logs from node jerma-worker2 pod pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f container test-container: 
STEP: delete the pod
Sep 17 18:00:54.938: INFO: Waiting for pod pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f to disappear
Sep 17 18:00:54.981: INFO: Pod pod-aaf4692b-fb9d-4a86-9e46-ad19b1c3b90f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:00:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8330" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4504,"failed":0}

------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:00:54.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-a0a30184-e50d-4031-82ad-59710f41f6a7
STEP: Creating a pod to test consume configMaps
Sep 17 18:00:55.059: INFO: Waiting up to 5m0s for pod "pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9" in namespace "configmap-2118" to be "success or failure"
Sep 17 18:00:55.062: INFO: Pod "pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.221731ms
Sep 17 18:00:57.114: INFO: Pod "pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055762006s
Sep 17 18:00:59.121: INFO: Pod "pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062369357s
STEP: Saw pod success
Sep 17 18:00:59.121: INFO: Pod "pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9" satisfied condition "success or failure"
Sep 17 18:00:59.138: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9 container configmap-volume-test: 
STEP: delete the pod
Sep 17 18:00:59.199: INFO: Waiting for pod pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9 to disappear
Sep 17 18:00:59.227: INFO: Pod pod-configmaps-20137eda-20fb-492c-9aed-2afd1e370dc9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:00:59.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2118" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4504,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:00:59.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-lk5f
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 18:00:59.345: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lk5f" in namespace "subpath-7347" to be "success or failure"
Sep 17 18:00:59.367: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.765478ms
Sep 17 18:01:01.447: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101652463s
Sep 17 18:01:03.455: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 4.109281394s
Sep 17 18:01:05.462: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 6.116404083s
Sep 17 18:01:07.468: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 8.123179666s
Sep 17 18:01:09.475: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 10.129239517s
Sep 17 18:01:11.481: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 12.13571142s
Sep 17 18:01:13.487: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 14.141665327s
Sep 17 18:01:15.494: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 16.148643977s
Sep 17 18:01:17.500: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 18.155140973s
Sep 17 18:01:19.508: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 20.16237087s
Sep 17 18:01:21.515: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Running", Reason="", readiness=true. Elapsed: 22.169301447s
Sep 17 18:01:23.522: INFO: Pod "pod-subpath-test-projected-lk5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.176722953s
STEP: Saw pod success
Sep 17 18:01:23.522: INFO: Pod "pod-subpath-test-projected-lk5f" satisfied condition "success or failure"
Sep 17 18:01:23.527: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-lk5f container test-container-subpath-projected-lk5f: 
STEP: delete the pod
Sep 17 18:01:23.557: INFO: Waiting for pod pod-subpath-test-projected-lk5f to disappear
Sep 17 18:01:23.590: INFO: Pod pod-subpath-test-projected-lk5f no longer exists
STEP: Deleting pod pod-subpath-test-projected-lk5f
Sep 17 18:01:23.590: INFO: Deleting pod "pod-subpath-test-projected-lk5f" in namespace "subpath-7347"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:01:23.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7347" for this suite.

• [SLOW TEST:24.368 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":273,"skipped":4515,"failed":0}
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:01:23.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Sep 17 18:01:28.770: INFO: Pod name pod-adoption-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:01:28.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1693" for this suite.

• [SLOW TEST:5.273 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":274,"skipped":4515,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:01:28.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 17 18:01:28.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3803'
Sep 17 18:01:37.172: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 17 18:01:37.173: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Sep 17 18:01:39.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3803'
Sep 17 18:01:40.481: INFO: stderr: ""
Sep 17 18:01:40.481: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:01:40.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3803" for this suite.

• [SLOW TEST:11.618 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625
    should create a deployment from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":275,"skipped":4527,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:01:40.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Sep 17 18:01:40.554: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 17 18:01:40.620: INFO: Waiting for terminating namespaces to be deleted...
Sep 17 18:01:40.624: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Sep 17 18:01:40.637: INFO: kube-proxy-4jmbs from kube-system started at 2020-09-13 16:54:28 +0000 UTC (1 container status recorded)
Sep 17 18:01:40.637: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 17 18:01:40.637: INFO: kindnet-m6c7w from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container status recorded)
Sep 17 18:01:40.637: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 17 18:01:40.637: INFO: e2e-test-httpd-deployment-594dddd44f-9m848 from kubectl-3803 started at 2020-09-17 18:01:37 +0000 UTC (1 container status recorded)
Sep 17 18:01:40.637: INFO: 	Container e2e-test-httpd-deployment ready: false, restart count 0
Sep 17 18:01:40.638: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Sep 17 18:01:40.649: INFO: kube-proxy-2w9xp from kube-system started at 2020-09-13 16:54:31 +0000 UTC (1 container status recorded)
Sep 17 18:01:40.649: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 17 18:01:40.649: INFO: kindnet-4ckzg from kube-system started at 2020-09-13 16:54:34 +0000 UTC (1 container status recorded)
Sep 17 18:01:40.649: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 17 18:01:40.649: INFO: pod-adoption-release from replicaset-1693 started at 2020-09-17 18:01:23 +0000 UTC (1 container status recorded)
Sep 17 18:01:40.649: INFO: 	Container pod-adoption-release ready: false, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-65bf2881-5640-44a4-9065-79b9604d1146 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-65bf2881-5640-44a4-9065-79b9604d1146 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-65bf2881-5640-44a4-9065-79b9604d1146
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:01:48.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-386" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:8.345 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":276,"skipped":4536,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:01:48.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Sep 17 18:01:48.921: INFO: Created pod &Pod{ObjectMeta:{dns-9620  dns-9620 /api/v1/namespaces/dns-9620/pods/dns-9620 9be00f84-8e60-40c7-b66f-3f0e1fdfa97b 1092110 0 2020-09-17 18:01:48 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rmx5m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rmx5m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rmx5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Sep 17 18:01:52.967: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9620 PodName:dns-9620 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 18:01:52.967: INFO: >>> kubeConfig: /root/.kube/config
I0917 18:01:53.076314       7 log.go:172] (0xaa972d0) (0xaa97340) Create stream
I0917 18:01:53.076483       7 log.go:172] (0xaa972d0) (0xaa97340) Stream added, broadcasting: 1
I0917 18:01:53.086112       7 log.go:172] (0xaa972d0) Reply frame received for 1
I0917 18:01:53.086421       7 log.go:172] (0xaa972d0) (0xad76070) Create stream
I0917 18:01:53.086529       7 log.go:172] (0xaa972d0) (0xad76070) Stream added, broadcasting: 3
I0917 18:01:53.088369       7 log.go:172] (0xaa972d0) Reply frame received for 3
I0917 18:01:53.088547       7 log.go:172] (0xaa972d0) (0xa51e7e0) Create stream
I0917 18:01:53.088628       7 log.go:172] (0xaa972d0) (0xa51e7e0) Stream added, broadcasting: 5
I0917 18:01:53.090152       7 log.go:172] (0xaa972d0) Reply frame received for 5
I0917 18:01:53.176106       7 log.go:172] (0xaa972d0) Data frame received for 3
I0917 18:01:53.176448       7 log.go:172] (0xad76070) (3) Data frame handling
I0917 18:01:53.176750       7 log.go:172] (0xad76070) (3) Data frame sent
I0917 18:01:53.177002       7 log.go:172] (0xaa972d0) Data frame received for 3
I0917 18:01:53.177206       7 log.go:172] (0xad76070) (3) Data frame handling
I0917 18:01:53.177413       7 log.go:172] (0xaa972d0) Data frame received for 5
I0917 18:01:53.177536       7 log.go:172] (0xa51e7e0) (5) Data frame handling
I0917 18:01:53.178499       7 log.go:172] (0xaa972d0) Data frame received for 1
I0917 18:01:53.178627       7 log.go:172] (0xaa97340) (1) Data frame handling
I0917 18:01:53.178845       7 log.go:172] (0xaa97340) (1) Data frame sent
I0917 18:01:53.178991       7 log.go:172] (0xaa972d0) (0xaa97340) Stream removed, broadcasting: 1
I0917 18:01:53.179157       7 log.go:172] (0xaa972d0) Go away received
I0917 18:01:53.179927       7 log.go:172] (0xaa972d0) (0xaa97340) Stream removed, broadcasting: 1
I0917 18:01:53.180283       7 log.go:172] (0xaa972d0) (0xad76070) Stream removed, broadcasting: 3
I0917 18:01:53.180429       7 log.go:172] (0xaa972d0) (0xa51e7e0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Sep 17 18:01:53.181: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9620 PodName:dns-9620 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 17 18:01:53.181: INFO: >>> kubeConfig: /root/.kube/config
I0917 18:01:53.289822       7 log.go:172] (0xaa97ab0) (0xaa97b20) Create stream
I0917 18:01:53.289979       7 log.go:172] (0xaa97ab0) (0xaa97b20) Stream added, broadcasting: 1
I0917 18:01:53.294334       7 log.go:172] (0xaa97ab0) Reply frame received for 1
I0917 18:01:53.294605       7 log.go:172] (0xaa97ab0) (0xaa97ce0) Create stream
I0917 18:01:53.294754       7 log.go:172] (0xaa97ab0) (0xaa97ce0) Stream added, broadcasting: 3
I0917 18:01:53.297069       7 log.go:172] (0xaa97ab0) Reply frame received for 3
I0917 18:01:53.297270       7 log.go:172] (0xaa97ab0) (0xa51e9a0) Create stream
I0917 18:01:53.297390       7 log.go:172] (0xaa97ab0) (0xa51e9a0) Stream added, broadcasting: 5
I0917 18:01:53.298948       7 log.go:172] (0xaa97ab0) Reply frame received for 5
I0917 18:01:53.366742       7 log.go:172] (0xaa97ab0) Data frame received for 3
I0917 18:01:53.366963       7 log.go:172] (0xaa97ce0) (3) Data frame handling
I0917 18:01:53.367145       7 log.go:172] (0xaa97ce0) (3) Data frame sent
I0917 18:01:53.367473       7 log.go:172] (0xaa97ab0) Data frame received for 3
I0917 18:01:53.367658       7 log.go:172] (0xaa97ce0) (3) Data frame handling
I0917 18:01:53.367855       7 log.go:172] (0xaa97ab0) Data frame received for 5
I0917 18:01:53.367991       7 log.go:172] (0xa51e9a0) (5) Data frame handling
I0917 18:01:53.368905       7 log.go:172] (0xaa97ab0) Data frame received for 1
I0917 18:01:53.369096       7 log.go:172] (0xaa97b20) (1) Data frame handling
I0917 18:01:53.369277       7 log.go:172] (0xaa97b20) (1) Data frame sent
I0917 18:01:53.369465       7 log.go:172] (0xaa97ab0) (0xaa97b20) Stream removed, broadcasting: 1
I0917 18:01:53.369622       7 log.go:172] (0xaa97ab0) Go away received
I0917 18:01:53.370128       7 log.go:172] (0xaa97ab0) (0xaa97b20) Stream removed, broadcasting: 1
I0917 18:01:53.370308       7 log.go:172] (0xaa97ab0) (0xaa97ce0) Stream removed, broadcasting: 3
I0917 18:01:53.370513       7 log.go:172] (0xaa97ab0) (0xa51e9a0) Stream removed, broadcasting: 5
Sep 17 18:01:53.370: INFO: Deleting pod dns-9620...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:01:53.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9620" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":277,"skipped":4552,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep 17 18:01:53.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Sep 17 18:02:28.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4717" for this suite.

• [SLOW TEST:34.806 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4560,"failed":0}
SSSSSS
Sep 17 18:02:28.285: INFO: Running AfterSuite actions on all nodes
Sep 17 18:02:28.286: INFO: Running AfterSuite actions on node 1
Sep 17 18:02:28.286: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 5679.461 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS