I0820 22:47:20.680535 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0820 22:47:20.680751 6 e2e.go:109] Starting e2e run "1e3aa255-819f-4cf1-9e7c-20a3bc22599f" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597963639 - Will randomize all specs
Will run 278 of 4844 specs

Aug 20 22:47:20.732: INFO: >>> kubeConfig: /root/.kube/config
Aug 20 22:47:20.736: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 20 22:47:20.762: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 20 22:47:20.797: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 20 22:47:20.797: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 20 22:47:20.797: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 20 22:47:20.805: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 20 22:47:20.805: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 20 22:47:20.805: INFO: e2e test version: v1.17.11
Aug 20 22:47:20.806: INFO: kube-apiserver version: v1.17.5
Aug 20 22:47:20.806: INFO: >>> kubeConfig: /root/.kube/config
Aug 20 22:47:20.811: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:20.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Aug 20 22:47:20.966: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
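The repeated "Creating a kubernetes client" / ">>> kubeConfig" steps throughout this run amount to loading a kubeconfig and building a clientset. A minimal client-go sketch of that step, assuming only the kubeconfig path shown in the log (this is not the e2e framework's actual code):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the kubeconfig path the suite logs.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Roughly what produces the "kube-apiserver version: v1.17.5" line above.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver version:", v.GitVersion)
}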
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 22:47:21.655: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 22:47:23.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560441, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560441, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560441, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560441, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 22:47:26.759: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 22:47:26.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4111-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:47:27.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1435" for this suite.
STEP: Destroying namespace "webhook-1435-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.791 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":1,"skipped":21,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:27.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 20 22:47:27.708: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:47:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8731" for this suite.
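For context on what this spec exercises: init containers run sequentially to completion before the app containers start, and on a RestartAlways pod the app container then keeps running. A hypothetical minimal pod of that shape as Go structs (names and images are illustrative, not the test's actual values):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Each init container must exit 0 before the next one starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			// Only after both init containers succeed does this container start.
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}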
• [SLOW TEST:8.417 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":2,"skipped":79,"failed":0}
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:36.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-05dea49c-e5b9-4276-ae98-441f5d3a8a3e
STEP: Creating a pod to test consume secrets
Aug 20 22:47:36.131: INFO: Waiting up to 5m0s for pod "pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381" in namespace "secrets-4008" to be "success or failure"
Aug 20 22:47:36.135: INFO: Pod "pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381": Phase="Pending", Reason="", readiness=false. Elapsed: 3.9065ms
Aug 20 22:47:38.138: INFO: Pod "pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006782396s
Aug 20 22:47:40.142: INFO: Pod "pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010752276s
STEP: Saw pod success
Aug 20 22:47:40.142: INFO: Pod "pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381" satisfied condition "success or failure"
Aug 20 22:47:40.144: INFO: Trying to get logs from node jerma-worker pod pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381 container secret-volume-test:
STEP: delete the pod
Aug 20 22:47:40.387: INFO: Waiting for pod pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381 to disappear
Aug 20 22:47:40.398: INFO: Pod pod-secrets-18a270a7-5165-48c4-a5b9-8ed520de3381 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:47:40.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4008" for this suite.
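The secret spec above mounts a secret volume with defaultMode set and asserts on the resulting file mode inside the container. A sketch of the relevant pod spec, with hypothetical names and a plain busybox command in place of the test's own probe image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func secretDefaultModePod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secretName,
						DefaultMode: int32Ptr(0400), // every projected key gets mode 0400
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
}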
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":80,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:40.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-31e2a212-6fff-4d2a-837f-85fff769c422
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:47:46.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6003" for this suite.
• [SLOW TEST:6.174 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":85,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:46.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 20 22:47:46.722: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 20 22:47:46.734: INFO: Waiting for terminating namespaces to be deleted...
Aug 20 22:47:46.736: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Aug 20 22:47:46.742: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 20 22:47:46.742: INFO: Container app ready: true, restart count 0
Aug 20 22:47:46.742: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 22:47:46.742: INFO: Container kube-proxy ready: true, restart count 0
Aug 20 22:47:46.742: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 22:47:46.742: INFO: Container kindnet-cni ready: true, restart count 0
Aug 20 22:47:46.742: INFO: rally-ad767070-21p7qcsn from c-rally-ad767070-eq6hqcpx started at 2020-08-20 22:47:36 +0000 UTC (1 container statuses recorded)
Aug 20 22:47:46.742: INFO: Container rally-ad767070-21p7qcsn ready: false, restart count 0
Aug 20 22:47:46.742: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Aug 20 22:47:46.747: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 22:47:46.747: INFO: Container kube-proxy ready: true, restart count 0
Aug 20 22:47:46.747: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 22:47:46.747: INFO: Container kindnet-cni ready: true, restart count 0
Aug 20 22:47:46.747: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 20 22:47:46.747: INFO: Container app ready: true, restart count 0
Aug 20 22:47:46.747: INFO: pod-configmaps-a9c8d566-3fe6-4921-9076-dad57bf41742 from configmap-6003 started at 2020-08-20 22:47:40 +0000 UTC (2 container statuses recorded)
Aug 20 22:47:46.747: INFO: Container configmap-volume-binary-test ready: false, restart count 0
Aug 20 22:47:46.747: INFO: Container configmap-volume-data-test ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.162d1b7d11767cc7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:47:47.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7172" for this suite.
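What the scheduling spec above verifies: a pod whose nodeSelector matches no node label stays Pending, and the scheduler emits the FailedScheduling event recorded in the log. A hypothetical sketch of such a pod (label key and value are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func unschedulablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so scheduling fails with
			// "0/3 nodes are available: 3 node(s) didn't match node selector."
			NodeSelector: map[string]string{"label": "nonexistent-value"}, // hypothetical selector
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}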
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":5,"skipped":88,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:47.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 22:47:48.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 22:47:50.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 22:47:52.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733560468, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 22:47:55.559: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:47:55.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3491" for this suite.
STEP: Destroying namespace "webhook-3491-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.047 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":6,"skipped":103,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:55.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-0c77ebeb-5d4b-44c7-af71-cef90966b810
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:47:55.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9444" for this suite.
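The negative test above relies on API-server validation: ConfigMap keys must be non-empty (and match [-._a-zA-Z0-9]+), so a create with an empty key is rejected. A minimal sketch, assuming a clientset built as earlier; note the context-taking Create signature is the client-go v0.18+ form, while the 1.17-era client used here took no context:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createEmptyKeyConfigMap(ctx context.Context, c kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // hypothetical name
		Data:       map[string]string{"": "value"},                     // empty key is invalid
	}
	// The API server returns an Invalid error; the e2e test asserts exactly that.
	_, err := c.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	return err
}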
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":7,"skipped":112,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:47:55.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-736adc7e-6a96-4921-b4de-568b15ef5263
STEP: Creating a pod to test consume secrets
Aug 20 22:47:56.024: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c" in namespace "projected-7024" to be "success or failure"
Aug 20 22:47:56.027: INFO: Pod "pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.434464ms
Aug 20 22:47:58.031: INFO: Pod "pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007282002s
Aug 20 22:48:00.035: INFO: Pod "pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010914036s
STEP: Saw pod success
Aug 20 22:48:00.035: INFO: Pod "pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c" satisfied condition "success or failure"
Aug 20 22:48:00.037: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c container projected-secret-volume-test:
STEP: delete the pod
Aug 20 22:48:00.286: INFO: Waiting for pod pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c to disappear
Aug 20 22:48:00.297: INFO: Pod pod-projected-secrets-b53da543-595f-4e4c-a714-84328bdb178c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:48:00.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7024" for this suite.
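"With mappings" in the projected-secret spec above means the secret keys are remapped to new file paths via items in a projected volume source. A sketch of the volume portion, with hypothetical key and path names:

package sketch

import corev1 "k8s.io/api/core/v1"

func projectedSecretVolume(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						// Key "data-1" appears in the container as "new-path-data-1"
						// instead of under its own name.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}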
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":153,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:48:00.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-98d40592-1416-4f98-8591-1f1f5ce170b5
STEP: Creating a pod to test consume configMaps
Aug 20 22:48:00.411: INFO: Waiting up to 5m0s for pod "pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c" in namespace "configmap-2999" to be "success or failure"
Aug 20 22:48:00.451: INFO: Pod "pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.252794ms
Aug 20 22:48:02.470: INFO: Pod "pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05820704s
Aug 20 22:48:04.474: INFO: Pod "pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062832934s
STEP: Saw pod success
Aug 20 22:48:04.474: INFO: Pod "pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c" satisfied condition "success or failure"
Aug 20 22:48:04.476: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c container configmap-volume-test:
STEP: delete the pod
Aug 20 22:48:04.508: INFO: Waiting for pod pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c to disappear
Aug 20 22:48:04.513: INFO: Pod pod-configmaps-9685c300-1842-410b-aa68-ba4eac4c863c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:48:04.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2999" for this suite.
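The ConfigMap variant of the same idea uses the plain (non-projected) configMap volume source. Sketch of a volume that remaps one key, with hypothetical key and path names:

package sketch

import corev1 "k8s.io/api/core/v1"

func configMapVolumeWithMappings(cmName string) corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				// Without Items every key becomes a file named after the key;
				// with Items only the listed keys are projected, at the given paths.
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "path/to/data-2",
				}},
			},
		},
	}
}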
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:48:04.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-0e1e423d-c35c-40ed-8d87-81fa150c8026
STEP: Creating a pod to test consume secrets
Aug 20 22:48:04.647: INFO: Waiting up to 5m0s for pod "pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43" in namespace "secrets-4581" to be "success or failure"
Aug 20 22:48:04.651: INFO: Pod "pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233368ms
Aug 20 22:48:06.655: INFO: Pod "pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007895773s
Aug 20 22:48:08.658: INFO: Pod "pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01147579s
STEP: Saw pod success
Aug 20 22:48:08.658: INFO: Pod "pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43" satisfied condition "success or failure"
Aug 20 22:48:08.661: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43 container secret-volume-test:
STEP: delete the pod
Aug 20 22:48:08.695: INFO: Waiting for pod pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43 to disappear
Aug 20 22:48:08.705: INFO: Pod pod-secrets-4afe7c00-889b-4c11-816c-32ffba7b4a43 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:48:08.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4581" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":177,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:48:08.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-d61b51a8-dcbc-40c8-b516-5607cbf2e0ff
STEP: Creating a pod to test consume configMaps
Aug 20 22:48:08.784: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f" in namespace "configmap-8184" to be "success or failure"
Aug 20 22:48:08.788: INFO: Pod "pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011648ms
Aug 20 22:48:10.792: INFO: Pod "pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008450176s
Aug 20 22:48:12.796: INFO: Pod "pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f": Phase="Running", Reason="", readiness=true. Elapsed: 4.012493627s
Aug 20 22:48:14.823: INFO: Pod "pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038853123s
STEP: Saw pod success
Aug 20 22:48:14.823: INFO: Pod "pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f" satisfied condition "success or failure"
Aug 20 22:48:14.825: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f container configmap-volume-test:
STEP: delete the pod
Aug 20 22:48:14.844: INFO: Waiting for pod pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f to disappear
Aug 20 22:48:14.849: INFO: Pod pod-configmaps-3f2e612a-8ca7-4e75-8695-8e296380858f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:48:14.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8184" for this suite.
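"As non-root" in the spec above is driven by the pod security context: the kubelet runs the container with the given UID, and the mounted configMap files must still be readable. A sketch of the relevant fields (the UID is illustrative, not the test's actual value):

package sketch

import corev1 "k8s.io/api/core/v1"

func int64Ptr(i int64) *int64 { return &i }

func nonRootPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		// Every container in the pod runs with this UID instead of root.
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: int64Ptr(1000), // hypothetical non-root UID
		},
		Containers: []corev1.Container{{
			Name:    "configmap-volume-test",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"id"}, // would print uid=1000
		}},
	}
}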
• [SLOW TEST:6.145 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:48:14.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 20 22:48:14.911: INFO: Waiting up to 5m0s for pod "pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787" in namespace "emptydir-2837" to be "success or failure"
Aug 20 22:48:14.915: INFO: Pod "pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787": Phase="Pending", Reason="", readiness=false. Elapsed: 3.648996ms
Aug 20 22:48:16.919: INFO: Pod "pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007948875s
Aug 20 22:48:18.991: INFO: Pod "pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787": Phase="Running", Reason="", readiness=true. Elapsed: 4.079421402s
Aug 20 22:48:20.994: INFO: Pod "pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083004829s
STEP: Saw pod success
Aug 20 22:48:20.994: INFO: Pod "pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787" satisfied condition "success or failure"
Aug 20 22:48:20.997: INFO: Trying to get logs from node jerma-worker pod pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787 container test-container:
STEP: delete the pod
Aug 20 22:48:21.014: INFO: Waiting for pod pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787 to disappear
Aug 20 22:48:21.030: INFO: Pod pod-ee6b67f8-ffcc-43c3-8bc9-e8540f755787 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:48:21.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2837" for this suite.
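"(root,0644,tmpfs)" names the three knobs this emptyDir spec family varies: the user the container runs as, the file mode it writes, and the volume medium. Backing the emptyDir with memory gives a tmpfs mount. Sketch of the volume definition:

package sketch

import corev1 "k8s.io/api/core/v1"

func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				// StorageMediumMemory mounts a tmpfs instead of node disk,
				// which is what the "on tmpfs" half of the spec checks.
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
}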
• [SLOW TEST:6.182 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:48:21.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 22:48:21.118: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 20 22:48:21.137: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:21.148: INFO: Number of nodes with available pods: 0
Aug 20 22:48:21.148: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 22:48:22.154: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:22.157: INFO: Number of nodes with available pods: 0
Aug 20 22:48:22.157: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 22:48:23.226: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:23.239: INFO: Number of nodes with available pods: 0
Aug 20 22:48:23.239: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 22:48:24.152: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:24.155: INFO: Number of nodes with available pods: 0
Aug 20 22:48:24.155: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 22:48:25.152: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:25.155: INFO: Number of nodes with available pods: 2
Aug 20 22:48:25.155: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 20 22:48:25.214: INFO: Wrong image for pod: daemon-set-hmqp9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:25.215: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:25.221: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:26.226: INFO: Wrong image for pod: daemon-set-hmqp9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:26.226: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:26.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:27.244: INFO: Wrong image for pod: daemon-set-hmqp9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:27.244: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:27.248: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:28.226: INFO: Wrong image for pod: daemon-set-hmqp9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:28.226: INFO: Pod daemon-set-hmqp9 is not available
Aug 20 22:48:28.226: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:28.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:29.261: INFO: Wrong image for pod: daemon-set-hmqp9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:29.261: INFO: Pod daemon-set-hmqp9 is not available
Aug 20 22:48:29.261: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:29.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:30.225: INFO: Wrong image for pod: daemon-set-hmqp9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:30.225: INFO: Pod daemon-set-hmqp9 is not available
Aug 20 22:48:30.225: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:30.227: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:31.225: INFO: Wrong image for pod: daemon-set-hmqp9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:31.225: INFO: Pod daemon-set-hmqp9 is not available
Aug 20 22:48:31.225: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:31.228: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:32.225: INFO: Pod daemon-set-kzh2l is not available
Aug 20 22:48:32.225: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:32.229: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:33.226: INFO: Pod daemon-set-kzh2l is not available
Aug 20 22:48:33.226: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:33.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:34.226: INFO: Pod daemon-set-kzh2l is not available
Aug 20 22:48:34.226: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:34.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:35.225: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:35.229: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:36.290: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:36.290: INFO: Pod daemon-set-znvmx is not available
Aug 20 22:48:36.306: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:37.225: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:37.225: INFO: Pod daemon-set-znvmx is not available
Aug 20 22:48:37.228: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:38.226: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:38.226: INFO: Pod daemon-set-znvmx is not available
Aug 20 22:48:38.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:39.226: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:39.226: INFO: Pod daemon-set-znvmx is not available
Aug 20 22:48:39.231: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:40.225: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:40.225: INFO: Pod daemon-set-znvmx is not available
Aug 20 22:48:40.229: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:41.226: INFO: Wrong image for pod: daemon-set-znvmx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 20 22:48:41.226: INFO: Pod daemon-set-znvmx is not available
Aug 20 22:48:41.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:42.226: INFO: Pod daemon-set-flcv6 is not available
Aug 20 22:48:42.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 20 22:48:42.234: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:42.237: INFO: Number of nodes with available pods: 1
Aug 20 22:48:42.237: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 20 22:48:43.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:43.246: INFO: Number of nodes with available pods: 1
Aug 20 22:48:43.246: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 20 22:48:44.249: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:44.252: INFO: Number of nodes with available pods: 1
Aug 20 22:48:44.252: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 20 22:48:45.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 20 22:48:45.246: INFO: Number of nodes with available pods: 2
Aug 20 22:48:45.246: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4705, will wait for the garbage collector to delete the pods
Aug 20 22:48:45.320: INFO: Deleting DaemonSet.extensions daemon-set took: 6.387482ms
Aug 20 22:48:45.421: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.200444ms
Aug 20 22:48:48.424: INFO: Number of nodes with available pods: 0
Aug 20 22:48:48.424: INFO: Number of running nodes: 0, number of available pods: 0
Aug 20 22:48:48.426: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4705/daemonsets","resourceVersion":"1939213"},"items":null}
Aug 20 22:48:48.428: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4705/pods","resourceVersion":"1939213"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:48:48.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4705" for this suite.
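The sequence logged above (old pod deleted, "not available", replacement created node by node) is the RollingUpdate strategy at work: changing the pod template's image makes the controller replace daemon pods one node at a time, bounded by maxUnavailable. A hypothetical minimal DaemonSet of that shape, using the images named in the log:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func rollingUpdateDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				// Replace daemon pods incrementally on template changes.
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "app",
						// Patching this to gcr.io/kubernetes-e2e-test-images/agnhost:2.8
						// triggers the pod-by-pod rollout seen in the log.
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}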
• [SLOW TEST:27.404 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":13,"skipped":236,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 22:48:48.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-291674e6-3af3-42ff-85b3-0f7c10d728be
STEP: Creating a pod to test consume configMaps
Aug 20 22:48:48.541: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718" in namespace "configmap-732" to be "success or failure"
Aug 20 22:48:48.544: INFO: Pod "pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.699531ms
Aug 20 22:48:50.575: INFO: Pod "pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033487785s
Aug 20 22:48:52.579: INFO: Pod "pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037590962s
STEP: Saw pod success
Aug 20 22:48:52.579: INFO: Pod "pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718" satisfied condition "success or failure"
Aug 20 22:48:52.582: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718 container configmap-volume-test:
STEP: delete the pod
Aug 20 22:48:52.711: INFO: Waiting for pod pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718 to disappear
Aug 20 22:48:52.773: INFO: Pod pod-configmaps-ea3a744a-6207-4914-bae3-4bd90f3cb718 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 22:48:52.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-732" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":240,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:48:52.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 20 22:49:00.920: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 22:49:00.946: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 22:49:02.947: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 22:49:02.951: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 22:49:04.946: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 22:49:04.950: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 22:49:06.946: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 22:49:06.950: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 22:49:08.946: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 22:49:08.961: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 22:49:10.946: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 22:49:10.951: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 22:49:12.946: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 22:49:12.950: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:49:12.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8546" for this suite. 
• [SLOW TEST:20.196 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:49:12.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Aug 20 22:49:13.399: INFO: Waiting up to 5m0s for pod "var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7" in namespace "var-expansion-7717" to be "success or failure" Aug 20 22:49:13.430: INFO: Pod "var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.900556ms Aug 20 22:49:15.494: INFO: Pod "var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095111905s Aug 20 22:49:17.498: INFO: Pod "var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098930999s STEP: Saw pod success Aug 20 22:49:17.498: INFO: Pod "var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7" satisfied condition "success or failure" Aug 20 22:49:17.501: INFO: Trying to get logs from node jerma-worker pod var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7 container dapi-container: STEP: delete the pod Aug 20 22:49:17.570: INFO: Waiting for pod var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7 to disappear Aug 20 22:49:17.637: INFO: Pod var-expansion-caea2ff6-15f3-48f3-b252-d847328e71f7 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:49:17.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7717" for this suite. 
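------------------------------
Env composition means a later entry in a container's env list may reference earlier ones with $(NAME); the kubelet expands the reference when the container starts. A sketch of the env block such a test exercises; the variable values are illustrative, the container name dapi-container is the one in the log.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:    "dapi-container",
    		Image:   "busybox:1.29",
    		Command: []string{"sh", "-c", "env"},
    		Env: []corev1.EnvVar{
    			{Name: "FOO", Value: "foo-value"},
    			{Name: "BAR", Value: "bar-value"},
    			// $(FOO) and $(BAR) expand because both are defined earlier
    			// in this list; an undefined $(X) would be left verbatim.
    			{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
    		},
    	}
    	out, _ := json.MarshalIndent(c, "", "  ")
    	fmt.Println(string(out))
    }

------------------------------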
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":289,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:49:17.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Aug 20 22:49:17.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Aug 20 22:49:18.368: INFO: stderr: "" Aug 20 22:49:18.368: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:49:18.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4367" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":17,"skipped":299,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:49:18.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 20 22:49:19.114: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 20 22:49:19.126: INFO: Waiting for terminating namespaces to be deleted... Aug 20 22:49:19.128: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 20 22:49:19.132: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.132: INFO: Container kindnet-cni ready: true, restart count 0 Aug 20 22:49:19.132: INFO: rally-176b5648-9a8yjhxt-rgvbt from c-rally-176b5648-kh5hyzw5 started at 2020-08-20 22:49:08 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.132: INFO: Container rally-176b5648-9a8yjhxt ready: true, restart count 0 Aug 20 22:49:19.132: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.132: INFO: Container app ready: true, restart count 0 Aug 20 22:49:19.132: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.132: INFO: Container kube-proxy ready: true, restart count 0 Aug 20 22:49:19.132: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Aug 20 22:49:19.185: INFO: pod-handle-http-request from container-lifecycle-hook-8546 started at 2020-08-20 22:48:52 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.185: INFO: Container pod-handle-http-request ready: true, restart count 0 Aug 20 22:49:19.185: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.185: INFO: Container kube-proxy ready: true, restart count 0 Aug 20 22:49:19.185: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.185: INFO: Container app ready: true, restart count 0 Aug 20 22:49:19.185: INFO: rally-176b5648-9a8yjhxt-nnzd5 from c-rally-176b5648-kh5hyzw5 started at 2020-08-20 22:49:08 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.185: INFO: Container rally-176b5648-9a8yjhxt ready: true, restart count 0 Aug 20 22:49:19.185: INFO: rally-176b5648-9a8yjhxt-g925s from c-rally-176b5648-kh5hyzw5 started at 2020-08-20 22:49:14 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.185: INFO: Container 
rally-176b5648-9a8yjhxt ready: false, restart count 0 Aug 20 22:49:19.185: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 20 22:49:19.186: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5c544c8e-1846-4509-9a04-22da3abdd7b1 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-5c544c8e-1846-4509-9a04-22da3abdd7b1 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5c544c8e-1846-4509-9a04-22da3abdd7b1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:54:27.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5587" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:309.127 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":18,"skipped":312,"failed":0} SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:54:27.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple 
ports STEP: creating replication controller proxy-service-r5w2p in namespace proxy-9338 I0820 22:54:27.643828 6 runners.go:189] Created replication controller with name: proxy-service-r5w2p, namespace: proxy-9338, replica count: 1 I0820 22:54:28.694269 6 runners.go:189] proxy-service-r5w2p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 22:54:29.694526 6 runners.go:189] proxy-service-r5w2p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 22:54:30.694734 6 runners.go:189] proxy-service-r5w2p Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 22:54:31.694991 6 runners.go:189] proxy-service-r5w2p Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 22:54:32.695218 6 runners.go:189] proxy-service-r5w2p Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 22:54:33.695462 6 runners.go:189] proxy-service-r5w2p Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 22:54:34.695700 6 runners.go:189] proxy-service-r5w2p Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 20 22:54:34.699: INFO: setup took 7.109518291s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 20 22:54:34.706: INFO: (0) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 7.046146ms) Aug 20 22:54:34.707: INFO: (0) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 7.931161ms) Aug 20 22:54:34.710: INFO: (0) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 10.841078ms) Aug 20 22:54:34.710: INFO: (0) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 11.506742ms) Aug 20 22:54:34.711: INFO: (0) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 11.895573ms) Aug 20 22:54:34.711: INFO: (0) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 11.985846ms) Aug 20 22:54:34.711: INFO: (0) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 12.08088ms) Aug 20 22:54:34.711: INFO: (0) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... 
(200; 12.045461ms) Aug 20 22:54:34.711: INFO: (0) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 12.096268ms) Aug 20 22:54:34.711: INFO: (0) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 12.150297ms) Aug 20 22:54:34.712: INFO: (0) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 13.05924ms) Aug 20 22:54:34.717: INFO: (0) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 17.894139ms) Aug 20 22:54:34.717: INFO: (0) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 17.89963ms) Aug 20 22:54:34.717: INFO: (0) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 17.962826ms) Aug 20 22:54:34.717: INFO: (0) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 17.904094ms) Aug 20 22:54:34.718: INFO: (0) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 4.655282ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 4.613547ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.695386ms) Aug 20 22:54:34.722: INFO: (1) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 4.650904ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 5.018015ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 5.027313ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 5.045658ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 5.307296ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 5.334481ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 5.327241ms) Aug 20 22:54:34.723: INFO: (1) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 5.464288ms) Aug 20 22:54:34.724: INFO: (1) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 5.657133ms) Aug 20 22:54:34.724: INFO: (1) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 5.75472ms) Aug 20 22:54:34.724: INFO: (1) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 5.709718ms) Aug 20 22:54:34.727: INFO: (2) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... 
(200; 3.134012ms) Aug 20 22:54:34.727: INFO: (2) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 3.07396ms) Aug 20 22:54:34.728: INFO: (2) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.080387ms) Aug 20 22:54:34.728: INFO: (2) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 4.404558ms) Aug 20 22:54:34.728: INFO: (2) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 4.721495ms) Aug 20 22:54:34.728: INFO: (2) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 4.790827ms) Aug 20 22:54:34.728: INFO: (2) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 4.794961ms) Aug 20 22:54:34.728: INFO: (2) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 4.839174ms) Aug 20 22:54:34.728: INFO: (2) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 4.832746ms) Aug 20 22:54:34.729: INFO: (2) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 4.847594ms) Aug 20 22:54:34.729: INFO: (2) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 4.814098ms) Aug 20 22:54:34.729: INFO: (2) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test<... (200; 5.641494ms) Aug 20 22:54:34.735: INFO: (3) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 5.918191ms) Aug 20 22:54:34.735: INFO: (3) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 6.163772ms) Aug 20 22:54:34.736: INFO: (3) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 6.770504ms) Aug 20 22:54:34.737: INFO: (3) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 8.165421ms) Aug 20 22:54:34.738: INFO: (3) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 9.13433ms) Aug 20 22:54:34.745: INFO: (3) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 16.035974ms) Aug 20 22:54:34.745: INFO: (3) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 16.034857ms) Aug 20 22:54:34.745: INFO: (3) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 7.707922ms) Aug 20 22:54:34.753: INFO: (4) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 6.576696ms) Aug 20 22:54:34.753: INFO: (4) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 7.495204ms) Aug 20 22:54:34.753: INFO: (4) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 7.452467ms) Aug 20 22:54:34.753: INFO: (4) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 7.932252ms) Aug 20 22:54:34.753: INFO: (4) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 6.889967ms) Aug 20 22:54:34.769: INFO: (4) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... 
(200; 23.508878ms) Aug 20 22:54:34.769: INFO: (4) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 4.092576ms) Aug 20 22:54:34.778: INFO: (5) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 4.130215ms) Aug 20 22:54:34.778: INFO: (5) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 4.170535ms) Aug 20 22:54:34.778: INFO: (5) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 4.092952ms) Aug 20 22:54:34.778: INFO: (5) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 4.16329ms) Aug 20 22:54:34.778: INFO: (5) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.181385ms) Aug 20 22:54:34.778: INFO: (5) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 4.210045ms) Aug 20 22:54:34.778: INFO: (5) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 4.217867ms) Aug 20 22:54:34.785: INFO: (6) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 6.220376ms) Aug 20 22:54:34.785: INFO: (6) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 6.30294ms) Aug 20 22:54:34.785: INFO: (6) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 6.733212ms) Aug 20 22:54:34.785: INFO: (6) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 7.218653ms) Aug 20 22:54:34.785: INFO: (6) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 7.162195ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 7.247575ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 7.262338ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 7.291204ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 7.33967ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 7.374487ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 7.308981ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 7.748792ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 7.687474ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 7.77472ms) Aug 20 22:54:34.786: INFO: (6) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 7.813358ms) Aug 20 22:54:34.789: INFO: (7) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 2.693041ms) Aug 20 22:54:34.789: INFO: (7) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 2.794465ms) Aug 20 22:54:34.789: INFO: (7) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... 
(200; 2.685379ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 3.509462ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 3.505943ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 3.612329ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 3.636341ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 3.672014ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 3.777543ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 3.73938ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 3.911436ms) Aug 20 22:54:34.790: INFO: (7) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 4.042075ms) Aug 20 22:54:34.794: INFO: (8) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 3.465188ms) Aug 20 22:54:34.794: INFO: (8) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 3.682684ms) Aug 20 22:54:34.794: INFO: (8) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 4.041421ms) Aug 20 22:54:34.794: INFO: (8) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 4.111503ms) Aug 20 22:54:34.794: INFO: (8) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 4.139569ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.358119ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 4.335731ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.376267ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 4.608387ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 4.792425ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 4.834802ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 4.896031ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 4.853299ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... 
(200; 4.840121ms) Aug 20 22:54:34.795: INFO: (8) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 3.740112ms) Aug 20 22:54:34.799: INFO: (9) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 3.666513ms) Aug 20 22:54:34.799: INFO: (9) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 3.713727ms) Aug 20 22:54:34.799: INFO: (9) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 3.701024ms) Aug 20 22:54:34.799: INFO: (9) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 3.716156ms) Aug 20 22:54:34.799: INFO: (9) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 3.684308ms) Aug 20 22:54:34.799: INFO: (9) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test<... (200; 4.842324ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.924514ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 5.07296ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 5.078686ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 5.143571ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 5.204545ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 5.174752ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 5.239523ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 5.33882ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 5.46001ms) Aug 20 22:54:34.805: INFO: (10) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 5.497759ms) Aug 20 22:54:34.807: INFO: (11) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 1.87037ms) Aug 20 22:54:34.807: INFO: (11) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 2.116688ms) Aug 20 22:54:34.808: INFO: (11) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 2.241466ms) Aug 20 22:54:34.808: INFO: (11) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... 
(200; 2.744405ms) Aug 20 22:54:34.808: INFO: (11) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 4.340582ms) Aug 20 22:54:34.810: INFO: (11) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 4.426506ms) Aug 20 22:54:34.810: INFO: (11) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 4.374641ms) Aug 20 22:54:34.810: INFO: (11) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 4.477131ms) Aug 20 22:54:34.810: INFO: (11) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.542975ms) Aug 20 22:54:34.811: INFO: (11) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 5.249329ms) Aug 20 22:54:34.811: INFO: (11) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 5.242754ms) Aug 20 22:54:34.811: INFO: (11) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 5.233058ms) Aug 20 22:54:34.811: INFO: (11) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 5.252237ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 3.54684ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 3.561992ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 3.541ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 3.509512ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 3.521715ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 3.636655ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test<... (200; 3.535678ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 3.573286ms) Aug 20 22:54:34.814: INFO: (12) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 3.605102ms) Aug 20 22:54:34.817: INFO: (12) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 6.674837ms) Aug 20 22:54:34.817: INFO: (12) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 6.648679ms) Aug 20 22:54:34.817: INFO: (12) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 6.66808ms) Aug 20 22:54:34.817: INFO: (12) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 6.566912ms) Aug 20 22:54:34.817: INFO: (12) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 6.666454ms) Aug 20 22:54:34.817: INFO: (12) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 6.749433ms) Aug 20 22:54:34.820: INFO: (13) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... 
(200; 2.237712ms) Aug 20 22:54:34.820: INFO: (13) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 2.290207ms) Aug 20 22:54:34.822: INFO: (13) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 3.69225ms) Aug 20 22:54:34.822: INFO: (13) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.051535ms) Aug 20 22:54:34.822: INFO: (13) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 3.7666ms) Aug 20 22:54:34.822: INFO: (13) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: ... (200; 4.278656ms) Aug 20 22:54:34.822: INFO: (13) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 3.491122ms) Aug 20 22:54:34.823: INFO: (13) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 5.124374ms) Aug 20 22:54:34.823: INFO: (13) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 4.553025ms) Aug 20 22:54:34.823: INFO: (13) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 5.234446ms) Aug 20 22:54:34.823: INFO: (13) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 4.668225ms) Aug 20 22:54:34.823: INFO: (13) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 4.52362ms) Aug 20 22:54:34.826: INFO: (14) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 3.268514ms) Aug 20 22:54:34.827: INFO: (14) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 3.552047ms) Aug 20 22:54:34.827: INFO: (14) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 4.088427ms) Aug 20 22:54:34.827: INFO: (14) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 4.133315ms) Aug 20 22:54:34.827: INFO: (14) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 4.292239ms) Aug 20 22:54:34.829: INFO: (14) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 5.852572ms) Aug 20 22:54:34.829: INFO: (14) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test<... (200; 6.031771ms) Aug 20 22:54:34.829: INFO: (14) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 6.256755ms) Aug 20 22:54:34.830: INFO: (14) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 6.440128ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 5.841499ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 5.87847ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 5.87414ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: ... (200; 5.946901ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... 
(200; 5.909471ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 6.021761ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 6.107463ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 6.186442ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 6.312491ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 6.251884ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 6.305467ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 6.295526ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 6.321534ms) Aug 20 22:54:34.836: INFO: (15) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 6.301906ms) Aug 20 22:54:34.837: INFO: (15) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 7.019445ms) Aug 20 22:54:34.842: INFO: (16) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 5.568037ms) Aug 20 22:54:34.847: INFO: (16) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: ... (200; 11.25046ms) Aug 20 22:54:34.848: INFO: (16) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 11.270255ms) Aug 20 22:54:34.848: INFO: (16) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 11.283491ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 11.704421ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 11.949238ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 12.254901ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 12.23516ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 12.254452ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... 
(200; 12.239733ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 12.333306ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 12.354542ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 12.233127ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 12.44399ms) Aug 20 22:54:34.849: INFO: (16) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 12.366145ms) Aug 20 22:54:34.851: INFO: (17) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 1.7641ms) Aug 20 22:54:34.852: INFO: (17) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 2.30221ms) Aug 20 22:54:34.852: INFO: (17) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: test (200; 3.874815ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 4.101989ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 4.168899ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 4.125257ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 4.520423ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 4.458679ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 4.454612ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname1/proxy/: tls baz (200; 4.552106ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 4.515856ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 4.477057ms) Aug 20 22:54:34.854: INFO: (17) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 4.551076ms) Aug 20 22:54:34.857: INFO: (18) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 3.04601ms) Aug 20 22:54:34.857: INFO: (18) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 3.188044ms) Aug 20 22:54:34.858: INFO: (18) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname1/proxy/: foo (200; 3.509509ms) Aug 20 22:54:34.858: INFO: (18) /api/v1/namespaces/proxy-9338/services/http:proxy-service-r5w2p:portname2/proxy/: bar (200; 4.180445ms) Aug 20 22:54:34.858: INFO: (18) /api/v1/namespaces/proxy-9338/services/https:proxy-service-r5w2p:tlsportname2/proxy/: tls qux (200; 4.145059ms) Aug 20 22:54:34.858: INFO: (18) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname2/proxy/: bar (200; 4.199707ms) Aug 20 22:54:34.858: INFO: (18) /api/v1/namespaces/proxy-9338/services/proxy-service-r5w2p:portname1/proxy/: foo (200; 4.202414ms) Aug 20 22:54:34.859: INFO: (18) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: ... 
(200; 4.6255ms) Aug 20 22:54:34.859: INFO: (18) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:460/proxy/: tls baz (200; 4.670866ms) Aug 20 22:54:34.859: INFO: (18) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 4.705397ms) Aug 20 22:54:34.859: INFO: (18) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.604308ms) Aug 20 22:54:34.859: INFO: (18) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:462/proxy/: tls qux (200; 4.723706ms) Aug 20 22:54:34.859: INFO: (18) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 4.865161ms) Aug 20 22:54:34.863: INFO: (19) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:1080/proxy/: test<... (200; 3.823073ms) Aug 20 22:54:34.863: INFO: (19) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f:162/proxy/: bar (200; 3.755152ms) Aug 20 22:54:34.863: INFO: (19) /api/v1/namespaces/proxy-9338/pods/proxy-service-r5w2p-hgn9f/proxy/: test (200; 3.795756ms) Aug 20 22:54:34.863: INFO: (19) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:1080/proxy/: ... (200; 3.819739ms) Aug 20 22:54:34.863: INFO: (19) /api/v1/namespaces/proxy-9338/pods/http:proxy-service-r5w2p-hgn9f:160/proxy/: foo (200; 4.049972ms) Aug 20 22:54:34.863: INFO: (19) /api/v1/namespaces/proxy-9338/pods/https:proxy-service-r5w2p-hgn9f:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 20 22:54:42.895: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:54:42.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9007" for this suite. 
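------------------------------
The "Expected: &{} to match Container's Termination Message" line above is the assertion that the message stayed empty: with TerminationMessagePolicy set to FallbackToLogsOnError, the kubelet copies the log tail into the termination message only when the container fails, and this container succeeds. A sketch of the container fragment; name, image, and command are illustrative.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:    "termination-message-container",
    		Image:   "busybox:1.29",
    		Command: []string{"sh", "-c", "exit 0"}, // succeeds, emits no output
    		// Message is filled from the logs only on a non-zero exit;
    		// on success it remains empty, which is what the test asserts.
    		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    	}
    	out, _ := json.MarshalIndent(c, "", "  ")
    	fmt.Println(string(out))
    }

------------------------------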
• [SLOW TEST:5.201 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":316,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:54:42.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-97264ce0-711e-4e6e-8837-36c1cd7aee32 in namespace container-probe-3599 Aug 20 22:54:47.135: INFO: Started pod busybox-97264ce0-711e-4e6e-8837-36c1cd7aee32 in namespace container-probe-3599 STEP: checking the pod's current state and verifying that restartCount is present Aug 20 22:54:47.138: INFO: Initial restart count of pod busybox-97264ce0-711e-4e6e-8837-36c1cd7aee32 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:58:47.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3599" for this suite. 
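------------------------------
The four-minute quiet stretch above (22:54:47 to 22:58:47) is the test watching that restartCount stays at 0 while an exec liveness probe keeps succeeding. A sketch of the probe wiring with the v1.17-era embedded corev1.Handler (later releases use ProbeHandler); the probe command mirrors the test name, everything else is illustrative.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:    "busybox",
    		Image:   "busybox:1.29",
    		Command: []string{"sh", "-c", "echo ok >/tmp/health; sleep 600"},
    		LivenessProbe: &corev1.Probe{
    			// "cat /tmp/health" exits 0 while the file exists, so the
    			// kubelet never kills and restarts the container.
    			Handler: corev1.Handler{
    				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    			},
    			InitialDelaySeconds: 15,
    			FailureThreshold:    1,
    		},
    	}
    	out, _ := json.MarshalIndent(c, "", "  ")
    	fmt.Println(string(out))
    }

------------------------------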
• [SLOW TEST:244.887 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":317,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:58:47.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 20 22:58:47.900: INFO: Waiting up to 5m0s for pod "downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9" in namespace "downward-api-3119" to be "success or failure" Aug 20 22:58:47.904: INFO: Pod "downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865358ms Aug 20 22:58:49.907: INFO: Pod "downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007185172s Aug 20 22:58:51.910: INFO: Pod "downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009732537s STEP: Saw pod success Aug 20 22:58:51.910: INFO: Pod "downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9" satisfied condition "success or failure" Aug 20 22:58:51.912: INFO: Trying to get logs from node jerma-worker2 pod downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9 container dapi-container: STEP: delete the pod Aug 20 22:58:51.941: INFO: Waiting for pod downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9 to disappear Aug 20 22:58:51.946: INFO: Pod downward-api-6bef085b-a807-43d3-9a8c-b0371f30d0e9 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:58:51.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3119" for this suite. 
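------------------------------
The pod UID reaches the container through a fieldRef env source; no volume is involved in this Downward API variant. A sketch of the env fragment; the variable name is illustrative, the fieldPath is the one under test.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	env := []corev1.EnvVar{{
    		Name: "POD_UID",
    		ValueFrom: &corev1.EnvVarSource{
    			// Resolved by the kubelet from the pod's own metadata, so
    			// the container sees the server-generated UID at start-up.
    			FieldRef: &corev1.ObjectFieldSelector{
    				APIVersion: "v1",
    				FieldPath:  "metadata.uid",
    			},
    		},
    	}}
    	out, _ := json.MarshalIndent(env, "", "  ")
    	fmt.Println(string(out))
    }

------------------------------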
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":323,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:58:51.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8088 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8088 STEP: Deleting pre-stop pod Aug 20 22:59:05.095: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:59:05.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8088" for this suite. 
• [SLOW TEST:13.177 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":23,"skipped":336,"failed":0} SSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:59:05.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:59:05.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9717" for this suite. 
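What "secure master service" means here: the conformance check asserts that the default/kubernetes Service has a valid cluster IP and exposes a TCP port named "https" on 443. A sketch of that assertion as a standalone helper (hypothetical function; assumes k8s.io/api):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// checkSecureMasterService is a sketch of the property the test verifies
// on the default/kubernetes Service object.
func checkSecureMasterService(svc *corev1.Service) error {
	if svc.Spec.ClusterIP == "" || svc.Spec.ClusterIP == corev1.ClusterIPNone {
		return fmt.Errorf("service %s/%s has no cluster IP", svc.Namespace, svc.Name)
	}
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 && p.Protocol == corev1.ProtocolTCP {
			return nil // the API server is reachable on a secure port
		}
	}
	return fmt.Errorf("service %s/%s exposes no https/443 TCP port", svc.Namespace, svc.Name)
}

func main() {
	// Usage sketch with a hand-built object; in practice you would fetch
	// default/kubernetes with a client and pass the result in.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "kubernetes", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			ClusterIP: "10.96.0.1",
			Ports:     []corev1.ServicePort{{Name: "https", Port: 443, Protocol: corev1.ProtocolTCP}},
		},
	}
	fmt.Println(checkSecureMasterService(svc))
}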
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":24,"skipped":342,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:59:05.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 22:59:06.436: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 22:59:08.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561146, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561146, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 22:59:11.497: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:59:11.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1478" for this suite. 
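The "Registering the mutating pod webhook via the AdmissionRegistration API" step above amounts to creating a MutatingWebhookConfiguration that points at the just-deployed e2e-test-webhook service. A sketch of such an object (hypothetical names, rules, and path; assumes k8s.io/api/admissionregistration/v1, which is available in this v1.17 cluster):

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	path := "/mutating-pods"
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-mutator-demo"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "pod-mutator.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-demo",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				// Placeholder; a real configuration needs the serving CA here.
				CABundle: []byte("<PEM-encoded CA bundle>"),
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
			FailurePolicy:           &failurePolicy,
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}

With this registered, every pod CREATE in scope is sent to the webhook service, which returns a JSONPatch that the API server applies before admission completes, which is the mutation the test then asserts on.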
STEP: Destroying namespace "webhook-1478-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.489 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":25,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:59:11.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 22:59:11.823: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-20 I0820 22:59:11.850220 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-20, replica count: 1 I0820 22:59:12.900661 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 22:59:13.900949 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 22:59:14.901212 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 22:59:15.901521 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 20 22:59:16.049: INFO: Created: latency-svc-pbpd5 Aug 20 22:59:16.073: INFO: Got endpoints: latency-svc-pbpd5 [71.571104ms] Aug 20 22:59:16.147: INFO: Created: latency-svc-kwqxq Aug 20 22:59:16.151: INFO: Got endpoints: latency-svc-kwqxq [78.549667ms] Aug 20 22:59:16.202: INFO: Created: latency-svc-br9kv Aug 20 22:59:16.495: INFO: Got endpoints: latency-svc-br9kv [421.661168ms] Aug 20 22:59:16.499: INFO: Created: latency-svc-v9hgh Aug 20 22:59:16.529: INFO: Got endpoints: latency-svc-v9hgh [455.761281ms] Aug 20 22:59:16.580: INFO: Created: latency-svc-gxmvz Aug 20 22:59:16.632: INFO: Got endpoints: latency-svc-gxmvz [558.808036ms] Aug 20 22:59:16.649: INFO: 
Created: latency-svc-h8c97 Aug 20 22:59:16.662: INFO: Got endpoints: latency-svc-h8c97 [589.414159ms] Aug 20 22:59:16.694: INFO: Created: latency-svc-984tt Aug 20 22:59:16.704: INFO: Got endpoints: latency-svc-984tt [630.770397ms] Aug 20 22:59:16.723: INFO: Created: latency-svc-sfg65 Aug 20 22:59:16.776: INFO: Got endpoints: latency-svc-sfg65 [703.422669ms] Aug 20 22:59:16.816: INFO: Created: latency-svc-24vxk Aug 20 22:59:16.828: INFO: Got endpoints: latency-svc-24vxk [754.917323ms] Aug 20 22:59:16.940: INFO: Created: latency-svc-8wjnb Aug 20 22:59:16.947: INFO: Got endpoints: latency-svc-8wjnb [873.527231ms] Aug 20 22:59:17.004: INFO: Created: latency-svc-h4z9c Aug 20 22:59:17.016: INFO: Got endpoints: latency-svc-h4z9c [943.161902ms] Aug 20 22:59:17.087: INFO: Created: latency-svc-5vjpz Aug 20 22:59:17.091: INFO: Got endpoints: latency-svc-5vjpz [1.017482645s] Aug 20 22:59:17.176: INFO: Created: latency-svc-z2rs9 Aug 20 22:59:17.243: INFO: Got endpoints: latency-svc-z2rs9 [1.170352364s] Aug 20 22:59:17.336: INFO: Created: latency-svc-k2mst Aug 20 22:59:17.347: INFO: Got endpoints: latency-svc-k2mst [1.273391633s] Aug 20 22:59:17.423: INFO: Created: latency-svc-vdbpn Aug 20 22:59:17.437: INFO: Got endpoints: latency-svc-vdbpn [1.36358015s] Aug 20 22:59:17.463: INFO: Created: latency-svc-hr9jj Aug 20 22:59:17.480: INFO: Got endpoints: latency-svc-hr9jj [1.407104249s] Aug 20 22:59:17.662: INFO: Created: latency-svc-5fqqf Aug 20 22:59:17.700: INFO: Got endpoints: latency-svc-5fqqf [1.548462425s] Aug 20 22:59:17.914: INFO: Created: latency-svc-s6wxl Aug 20 22:59:17.945: INFO: Got endpoints: latency-svc-s6wxl [1.450654274s] Aug 20 22:59:18.057: INFO: Created: latency-svc-tcj2c Aug 20 22:59:18.074: INFO: Got endpoints: latency-svc-tcj2c [1.544809495s] Aug 20 22:59:18.110: INFO: Created: latency-svc-mgcvq Aug 20 22:59:18.126: INFO: Got endpoints: latency-svc-mgcvq [1.493448747s] Aug 20 22:59:18.353: INFO: Created: latency-svc-xzd4c Aug 20 22:59:18.392: INFO: Got endpoints: latency-svc-xzd4c [1.729095943s] Aug 20 22:59:18.420: INFO: Created: latency-svc-69qw8 Aug 20 22:59:18.427: INFO: Got endpoints: latency-svc-69qw8 [1.723206306s] Aug 20 22:59:18.447: INFO: Created: latency-svc-tsjtv Aug 20 22:59:18.507: INFO: Got endpoints: latency-svc-tsjtv [1.730013596s] Aug 20 22:59:18.561: INFO: Created: latency-svc-r95kh Aug 20 22:59:18.656: INFO: Got endpoints: latency-svc-r95kh [1.827919021s] Aug 20 22:59:18.684: INFO: Created: latency-svc-mk4kd Aug 20 22:59:18.710: INFO: Got endpoints: latency-svc-mk4kd [1.763350948s] Aug 20 22:59:18.794: INFO: Created: latency-svc-dz7vq Aug 20 22:59:18.800: INFO: Got endpoints: latency-svc-dz7vq [1.783970988s] Aug 20 22:59:18.824: INFO: Created: latency-svc-bt28p Aug 20 22:59:18.837: INFO: Got endpoints: latency-svc-bt28p [1.745922326s] Aug 20 22:59:18.869: INFO: Created: latency-svc-djr4x Aug 20 22:59:18.885: INFO: Got endpoints: latency-svc-djr4x [1.64123311s] Aug 20 22:59:18.956: INFO: Created: latency-svc-7s4x5 Aug 20 22:59:18.963: INFO: Got endpoints: latency-svc-7s4x5 [1.615831737s] Aug 20 22:59:19.022: INFO: Created: latency-svc-2pdwl Aug 20 22:59:19.036: INFO: Got endpoints: latency-svc-2pdwl [1.598825756s] Aug 20 22:59:19.104: INFO: Created: latency-svc-mj7jv Aug 20 22:59:19.119: INFO: Got endpoints: latency-svc-mj7jv [1.638563412s] Aug 20 22:59:19.158: INFO: Created: latency-svc-86cjv Aug 20 22:59:19.186: INFO: Got endpoints: latency-svc-86cjv [1.485853656s] Aug 20 22:59:19.273: INFO: Created: latency-svc-t5dqz Aug 20 22:59:19.286: INFO: Got endpoints: 
latency-svc-t5dqz [1.340947647s] Aug 20 22:59:19.362: INFO: Created: latency-svc-ch6ft Aug 20 22:59:19.373: INFO: Got endpoints: latency-svc-ch6ft [1.298834227s] Aug 20 22:59:19.443: INFO: Created: latency-svc-lvmk9 Aug 20 22:59:19.456: INFO: Got endpoints: latency-svc-lvmk9 [1.330210553s] Aug 20 22:59:19.484: INFO: Created: latency-svc-9smvn Aug 20 22:59:19.498: INFO: Got endpoints: latency-svc-9smvn [1.106469006s] Aug 20 22:59:19.584: INFO: Created: latency-svc-xt72z Aug 20 22:59:19.594: INFO: Got endpoints: latency-svc-xt72z [1.1669911s] Aug 20 22:59:19.634: INFO: Created: latency-svc-rwbwg Aug 20 22:59:19.649: INFO: Got endpoints: latency-svc-rwbwg [1.142247827s] Aug 20 22:59:19.670: INFO: Created: latency-svc-q87cg Aug 20 22:59:19.679: INFO: Got endpoints: latency-svc-q87cg [1.022944704s] Aug 20 22:59:19.739: INFO: Created: latency-svc-kdfmg Aug 20 22:59:19.756: INFO: Got endpoints: latency-svc-kdfmg [1.045450896s] Aug 20 22:59:19.860: INFO: Created: latency-svc-mdqzq Aug 20 22:59:19.863: INFO: Got endpoints: latency-svc-mdqzq [1.062814591s] Aug 20 22:59:19.889: INFO: Created: latency-svc-dff6j Aug 20 22:59:19.900: INFO: Got endpoints: latency-svc-dff6j [1.062846977s] Aug 20 22:59:19.925: INFO: Created: latency-svc-tv8tr Aug 20 22:59:19.936: INFO: Got endpoints: latency-svc-tv8tr [1.051358066s] Aug 20 22:59:20.009: INFO: Created: latency-svc-7csdd Aug 20 22:59:20.061: INFO: Got endpoints: latency-svc-7csdd [1.097873248s] Aug 20 22:59:20.090: INFO: Created: latency-svc-642mp Aug 20 22:59:20.105: INFO: Got endpoints: latency-svc-642mp [1.069140774s] Aug 20 22:59:20.168: INFO: Created: latency-svc-9g4t5 Aug 20 22:59:20.170: INFO: Got endpoints: latency-svc-9g4t5 [1.051371347s] Aug 20 22:59:20.201: INFO: Created: latency-svc-dtb7w Aug 20 22:59:20.214: INFO: Got endpoints: latency-svc-dtb7w [1.02813234s] Aug 20 22:59:20.320: INFO: Created: latency-svc-mbpvx Aug 20 22:59:20.325: INFO: Got endpoints: latency-svc-mbpvx [1.038619018s] Aug 20 22:59:20.363: INFO: Created: latency-svc-48dq9 Aug 20 22:59:20.376: INFO: Got endpoints: latency-svc-48dq9 [1.003279358s] Aug 20 22:59:20.412: INFO: Created: latency-svc-6g7lg Aug 20 22:59:20.536: INFO: Got endpoints: latency-svc-6g7lg [1.080230882s] Aug 20 22:59:20.585: INFO: Created: latency-svc-ff6tk Aug 20 22:59:20.613: INFO: Got endpoints: latency-svc-ff6tk [1.114775089s] Aug 20 22:59:20.628: INFO: Created: latency-svc-5pzfj Aug 20 22:59:20.672: INFO: Got endpoints: latency-svc-5pzfj [1.077948904s] Aug 20 22:59:20.721: INFO: Created: latency-svc-ftxjz Aug 20 22:59:20.731: INFO: Got endpoints: latency-svc-ftxjz [1.081733547s] Aug 20 22:59:20.760: INFO: Created: latency-svc-rlv7t Aug 20 22:59:20.811: INFO: Got endpoints: latency-svc-rlv7t [1.132011382s] Aug 20 22:59:20.814: INFO: Created: latency-svc-z2k5d Aug 20 22:59:20.821: INFO: Got endpoints: latency-svc-z2k5d [1.065028207s] Aug 20 22:59:20.846: INFO: Created: latency-svc-g7fhg Aug 20 22:59:20.858: INFO: Got endpoints: latency-svc-g7fhg [994.391459ms] Aug 20 22:59:20.876: INFO: Created: latency-svc-xs4g8 Aug 20 22:59:20.889: INFO: Got endpoints: latency-svc-xs4g8 [989.120872ms] Aug 20 22:59:20.906: INFO: Created: latency-svc-ddtzs Aug 20 22:59:20.949: INFO: Got endpoints: latency-svc-ddtzs [1.012726885s] Aug 20 22:59:21.000: INFO: Created: latency-svc-tdp5p Aug 20 22:59:21.008: INFO: Got endpoints: latency-svc-tdp5p [947.390424ms] Aug 20 22:59:21.030: INFO: Created: latency-svc-bk87n Aug 20 22:59:21.039: INFO: Got endpoints: latency-svc-bk87n [933.91935ms] Aug 20 22:59:21.105: INFO: Created: 
latency-svc-9bkr5 Aug 20 22:59:21.108: INFO: Got endpoints: latency-svc-9bkr5 [937.92329ms] Aug 20 22:59:21.134: INFO: Created: latency-svc-g9qb9 Aug 20 22:59:21.154: INFO: Got endpoints: latency-svc-g9qb9 [939.961924ms] Aug 20 22:59:21.186: INFO: Created: latency-svc-w6g7x Aug 20 22:59:21.260: INFO: Got endpoints: latency-svc-w6g7x [935.291194ms] Aug 20 22:59:21.291: INFO: Created: latency-svc-hphlw Aug 20 22:59:21.310: INFO: Got endpoints: latency-svc-hphlw [933.75802ms] Aug 20 22:59:21.410: INFO: Created: latency-svc-8kp98 Aug 20 22:59:21.413: INFO: Got endpoints: latency-svc-8kp98 [877.218569ms] Aug 20 22:59:21.443: INFO: Created: latency-svc-g2q58 Aug 20 22:59:21.453: INFO: Got endpoints: latency-svc-g2q58 [840.563606ms] Aug 20 22:59:21.473: INFO: Created: latency-svc-b4khx Aug 20 22:59:21.484: INFO: Got endpoints: latency-svc-b4khx [812.155207ms] Aug 20 22:59:21.560: INFO: Created: latency-svc-qjpl8 Aug 20 22:59:21.563: INFO: Got endpoints: latency-svc-qjpl8 [832.5088ms] Aug 20 22:59:21.596: INFO: Created: latency-svc-tsp2j Aug 20 22:59:21.611: INFO: Got endpoints: latency-svc-tsp2j [799.378721ms] Aug 20 22:59:21.647: INFO: Created: latency-svc-z82k9 Aug 20 22:59:21.709: INFO: Got endpoints: latency-svc-z82k9 [888.492775ms] Aug 20 22:59:21.714: INFO: Created: latency-svc-xmcrv Aug 20 22:59:21.719: INFO: Got endpoints: latency-svc-xmcrv [861.320193ms] Aug 20 22:59:21.752: INFO: Created: latency-svc-6ck4s Aug 20 22:59:21.774: INFO: Got endpoints: latency-svc-6ck4s [885.08505ms] Aug 20 22:59:21.865: INFO: Created: latency-svc-xv97m Aug 20 22:59:21.869: INFO: Got endpoints: latency-svc-xv97m [919.786769ms] Aug 20 22:59:21.917: INFO: Created: latency-svc-sxl5s Aug 20 22:59:21.930: INFO: Got endpoints: latency-svc-sxl5s [921.473253ms] Aug 20 22:59:21.949: INFO: Created: latency-svc-6r6kf Aug 20 22:59:21.961: INFO: Got endpoints: latency-svc-6r6kf [921.745706ms] Aug 20 22:59:22.015: INFO: Created: latency-svc-wbfx6 Aug 20 22:59:22.026: INFO: Got endpoints: latency-svc-wbfx6 [917.341653ms] Aug 20 22:59:22.061: INFO: Created: latency-svc-prtmv Aug 20 22:59:22.075: INFO: Got endpoints: latency-svc-prtmv [920.7348ms] Aug 20 22:59:22.098: INFO: Created: latency-svc-tc7bh Aug 20 22:59:22.213: INFO: Got endpoints: latency-svc-tc7bh [952.073921ms] Aug 20 22:59:22.215: INFO: Created: latency-svc-w9d2g Aug 20 22:59:22.225: INFO: Got endpoints: latency-svc-w9d2g [915.249506ms] Aug 20 22:59:22.248: INFO: Created: latency-svc-89h5r Aug 20 22:59:22.261: INFO: Got endpoints: latency-svc-89h5r [848.094097ms] Aug 20 22:59:22.283: INFO: Created: latency-svc-wv2mx Aug 20 22:59:22.292: INFO: Got endpoints: latency-svc-wv2mx [838.058745ms] Aug 20 22:59:22.359: INFO: Created: latency-svc-7f7mh Aug 20 22:59:22.382: INFO: Got endpoints: latency-svc-7f7mh [897.272004ms] Aug 20 22:59:22.382: INFO: Created: latency-svc-47mfs Aug 20 22:59:22.406: INFO: Got endpoints: latency-svc-47mfs [842.578702ms] Aug 20 22:59:22.433: INFO: Created: latency-svc-fjhvv Aug 20 22:59:22.506: INFO: Got endpoints: latency-svc-fjhvv [895.52031ms] Aug 20 22:59:22.508: INFO: Created: latency-svc-h9zks Aug 20 22:59:22.529: INFO: Got endpoints: latency-svc-h9zks [819.690313ms] Aug 20 22:59:22.568: INFO: Created: latency-svc-ks4lf Aug 20 22:59:22.587: INFO: Got endpoints: latency-svc-ks4lf [868.147279ms] Aug 20 22:59:22.604: INFO: Created: latency-svc-zhjm2 Aug 20 22:59:22.637: INFO: Got endpoints: latency-svc-zhjm2 [863.269164ms] Aug 20 22:59:22.655: INFO: Created: latency-svc-zsblj Aug 20 22:59:22.678: INFO: Got endpoints: latency-svc-zsblj 
[809.165452ms] Aug 20 22:59:22.703: INFO: Created: latency-svc-ljrwj Aug 20 22:59:22.726: INFO: Got endpoints: latency-svc-ljrwj [796.462225ms] Aug 20 22:59:22.783: INFO: Created: latency-svc-m5sk2 Aug 20 22:59:22.785: INFO: Got endpoints: latency-svc-m5sk2 [824.761781ms] Aug 20 22:59:22.814: INFO: Created: latency-svc-8cgcm Aug 20 22:59:22.828: INFO: Got endpoints: latency-svc-8cgcm [802.591976ms] Aug 20 22:59:22.844: INFO: Created: latency-svc-xgm8s Aug 20 22:59:22.859: INFO: Got endpoints: latency-svc-xgm8s [783.721745ms] Aug 20 22:59:22.877: INFO: Created: latency-svc-bhfqd Aug 20 22:59:22.931: INFO: Got endpoints: latency-svc-bhfqd [718.738992ms] Aug 20 22:59:22.933: INFO: Created: latency-svc-nkvzv Aug 20 22:59:22.943: INFO: Got endpoints: latency-svc-nkvzv [718.108224ms] Aug 20 22:59:22.970: INFO: Created: latency-svc-vp77c Aug 20 22:59:22.985: INFO: Got endpoints: latency-svc-vp77c [723.586256ms] Aug 20 22:59:23.000: INFO: Created: latency-svc-zsz55 Aug 20 22:59:23.010: INFO: Got endpoints: latency-svc-zsz55 [718.496493ms] Aug 20 22:59:23.030: INFO: Created: latency-svc-xh9m4 Aug 20 22:59:23.093: INFO: Got endpoints: latency-svc-xh9m4 [710.830596ms] Aug 20 22:59:23.095: INFO: Created: latency-svc-s2xzb Aug 20 22:59:23.106: INFO: Got endpoints: latency-svc-s2xzb [700.641841ms] Aug 20 22:59:23.136: INFO: Created: latency-svc-rhfbr Aug 20 22:59:23.155: INFO: Got endpoints: latency-svc-rhfbr [648.771654ms] Aug 20 22:59:23.192: INFO: Created: latency-svc-vh9vg Aug 20 22:59:23.230: INFO: Got endpoints: latency-svc-vh9vg [701.061278ms] Aug 20 22:59:23.252: INFO: Created: latency-svc-csvqk Aug 20 22:59:23.263: INFO: Got endpoints: latency-svc-csvqk [675.795529ms] Aug 20 22:59:23.309: INFO: Created: latency-svc-v6z5t Aug 20 22:59:23.430: INFO: Got endpoints: latency-svc-v6z5t [792.446531ms] Aug 20 22:59:23.465: INFO: Created: latency-svc-9wrgh Aug 20 22:59:23.536: INFO: Got endpoints: latency-svc-9wrgh [857.641794ms] Aug 20 22:59:23.576: INFO: Created: latency-svc-kqfq5 Aug 20 22:59:23.588: INFO: Got endpoints: latency-svc-kqfq5 [861.87471ms] Aug 20 22:59:23.621: INFO: Created: latency-svc-xjnz7 Aug 20 22:59:23.662: INFO: Got endpoints: latency-svc-xjnz7 [876.239284ms] Aug 20 22:59:23.687: INFO: Created: latency-svc-kr5sk Aug 20 22:59:23.703: INFO: Got endpoints: latency-svc-kr5sk [874.112587ms] Aug 20 22:59:23.738: INFO: Created: latency-svc-8cq7d Aug 20 22:59:23.751: INFO: Got endpoints: latency-svc-8cq7d [892.337193ms] Aug 20 22:59:23.800: INFO: Created: latency-svc-5gd9z Aug 20 22:59:23.804: INFO: Got endpoints: latency-svc-5gd9z [872.755732ms] Aug 20 22:59:23.837: INFO: Created: latency-svc-4z8t4 Aug 20 22:59:23.853: INFO: Got endpoints: latency-svc-4z8t4 [910.191994ms] Aug 20 22:59:23.873: INFO: Created: latency-svc-c6cfh Aug 20 22:59:23.890: INFO: Got endpoints: latency-svc-c6cfh [904.774006ms] Aug 20 22:59:23.950: INFO: Created: latency-svc-5dp5f Aug 20 22:59:23.954: INFO: Got endpoints: latency-svc-5dp5f [943.332533ms] Aug 20 22:59:23.978: INFO: Created: latency-svc-bw6rg Aug 20 22:59:23.995: INFO: Got endpoints: latency-svc-bw6rg [902.220716ms] Aug 20 22:59:24.018: INFO: Created: latency-svc-r5x4l Aug 20 22:59:24.029: INFO: Got endpoints: latency-svc-r5x4l [921.992194ms] Aug 20 22:59:24.123: INFO: Created: latency-svc-945zp Aug 20 22:59:24.126: INFO: Got endpoints: latency-svc-945zp [970.539314ms] Aug 20 22:59:24.160: INFO: Created: latency-svc-4gzt8 Aug 20 22:59:24.173: INFO: Got endpoints: latency-svc-4gzt8 [942.721095ms] Aug 20 22:59:24.194: INFO: Created: latency-svc-tnzj4 Aug 
20 22:59:24.203: INFO: Got endpoints: latency-svc-tnzj4 [940.201361ms] Aug 20 22:59:24.270: INFO: Created: latency-svc-28t8b Aug 20 22:59:24.294: INFO: Got endpoints: latency-svc-28t8b [864.319723ms] Aug 20 22:59:24.311: INFO: Created: latency-svc-nn7zj Aug 20 22:59:24.324: INFO: Got endpoints: latency-svc-nn7zj [787.876473ms] Aug 20 22:59:24.351: INFO: Created: latency-svc-jwgzx Aug 20 22:59:24.416: INFO: Got endpoints: latency-svc-jwgzx [827.9286ms] Aug 20 22:59:24.431: INFO: Created: latency-svc-bsk2f Aug 20 22:59:24.456: INFO: Got endpoints: latency-svc-bsk2f [794.706477ms] Aug 20 22:59:24.485: INFO: Created: latency-svc-xmnzk Aug 20 22:59:24.499: INFO: Got endpoints: latency-svc-xmnzk [796.28313ms] Aug 20 22:59:24.584: INFO: Created: latency-svc-6b7nn Aug 20 22:59:24.587: INFO: Got endpoints: latency-svc-6b7nn [835.901144ms] Aug 20 22:59:24.629: INFO: Created: latency-svc-9nrsb Aug 20 22:59:24.637: INFO: Got endpoints: latency-svc-9nrsb [832.795477ms] Aug 20 22:59:24.656: INFO: Created: latency-svc-dhr6h Aug 20 22:59:24.683: INFO: Got endpoints: latency-svc-dhr6h [829.975805ms] Aug 20 22:59:24.758: INFO: Created: latency-svc-hcflj Aug 20 22:59:24.773: INFO: Got endpoints: latency-svc-hcflj [882.926441ms] Aug 20 22:59:24.800: INFO: Created: latency-svc-sgmr5 Aug 20 22:59:24.812: INFO: Got endpoints: latency-svc-sgmr5 [858.345138ms] Aug 20 22:59:24.901: INFO: Created: latency-svc-hxm8d Aug 20 22:59:24.911: INFO: Got endpoints: latency-svc-hxm8d [915.666796ms] Aug 20 22:59:24.941: INFO: Created: latency-svc-nnqxj Aug 20 22:59:24.956: INFO: Got endpoints: latency-svc-nnqxj [927.571703ms] Aug 20 22:59:24.977: INFO: Created: latency-svc-zqmgq Aug 20 22:59:24.992: INFO: Got endpoints: latency-svc-zqmgq [866.526972ms] Aug 20 22:59:25.045: INFO: Created: latency-svc-d595s Aug 20 22:59:25.053: INFO: Got endpoints: latency-svc-d595s [879.985314ms] Aug 20 22:59:25.091: INFO: Created: latency-svc-6tc9s Aug 20 22:59:25.107: INFO: Got endpoints: latency-svc-6tc9s [903.846954ms] Aug 20 22:59:25.139: INFO: Created: latency-svc-g7vb7 Aug 20 22:59:25.192: INFO: Got endpoints: latency-svc-g7vb7 [897.408257ms] Aug 20 22:59:25.232: INFO: Created: latency-svc-d5f8l Aug 20 22:59:25.246: INFO: Got endpoints: latency-svc-d5f8l [921.770803ms] Aug 20 22:59:25.280: INFO: Created: latency-svc-dps6x Aug 20 22:59:25.350: INFO: Got endpoints: latency-svc-dps6x [933.697881ms] Aug 20 22:59:25.386: INFO: Created: latency-svc-6ds42 Aug 20 22:59:25.396: INFO: Got endpoints: latency-svc-6ds42 [939.965873ms] Aug 20 22:59:25.421: INFO: Created: latency-svc-6k9f5 Aug 20 22:59:25.433: INFO: Got endpoints: latency-svc-6k9f5 [933.693814ms] Aug 20 22:59:25.502: INFO: Created: latency-svc-k58f4 Aug 20 22:59:25.514: INFO: Got endpoints: latency-svc-k58f4 [927.122729ms] Aug 20 22:59:25.554: INFO: Created: latency-svc-phd6d Aug 20 22:59:25.577: INFO: Got endpoints: latency-svc-phd6d [940.209297ms] Aug 20 22:59:25.651: INFO: Created: latency-svc-6gnj4 Aug 20 22:59:25.654: INFO: Got endpoints: latency-svc-6gnj4 [970.822676ms] Aug 20 22:59:25.688: INFO: Created: latency-svc-rx5qw Aug 20 22:59:25.704: INFO: Got endpoints: latency-svc-rx5qw [930.526854ms] Aug 20 22:59:25.724: INFO: Created: latency-svc-pc7rk Aug 20 22:59:25.734: INFO: Got endpoints: latency-svc-pc7rk [921.505039ms] Aug 20 22:59:25.800: INFO: Created: latency-svc-6fx8k Aug 20 22:59:25.804: INFO: Got endpoints: latency-svc-6fx8k [892.79145ms] Aug 20 22:59:25.853: INFO: Created: latency-svc-dr2vj Aug 20 22:59:25.866: INFO: Got endpoints: latency-svc-dr2vj [909.858714ms] Aug 
20 22:59:25.889: INFO: Created: latency-svc-n5wm6 Aug 20 22:59:25.949: INFO: Got endpoints: latency-svc-n5wm6 [957.250886ms] Aug 20 22:59:25.988: INFO: Created: latency-svc-dpllx Aug 20 22:59:26.012: INFO: Got endpoints: latency-svc-dpllx [958.747943ms] Aug 20 22:59:26.042: INFO: Created: latency-svc-kv6wt Aug 20 22:59:26.105: INFO: Got endpoints: latency-svc-kv6wt [997.207337ms] Aug 20 22:59:26.129: INFO: Created: latency-svc-6w5jj Aug 20 22:59:26.143: INFO: Got endpoints: latency-svc-6w5jj [951.634157ms] Aug 20 22:59:26.167: INFO: Created: latency-svc-wth9d Aug 20 22:59:26.179: INFO: Got endpoints: latency-svc-wth9d [933.895826ms] Aug 20 22:59:26.203: INFO: Created: latency-svc-hq2fd Aug 20 22:59:26.249: INFO: Got endpoints: latency-svc-hq2fd [898.477057ms] Aug 20 22:59:26.276: INFO: Created: latency-svc-6h8bg Aug 20 22:59:26.288: INFO: Got endpoints: latency-svc-6h8bg [891.325434ms] Aug 20 22:59:26.315: INFO: Created: latency-svc-8gchf Aug 20 22:59:26.330: INFO: Got endpoints: latency-svc-8gchf [897.529792ms] Aug 20 22:59:26.392: INFO: Created: latency-svc-qwx2q Aug 20 22:59:26.403: INFO: Got endpoints: latency-svc-qwx2q [888.913085ms] Aug 20 22:59:26.462: INFO: Created: latency-svc-ks874 Aug 20 22:59:26.487: INFO: Got endpoints: latency-svc-ks874 [909.454684ms] Aug 20 22:59:26.559: INFO: Created: latency-svc-6xh7w Aug 20 22:59:26.565: INFO: Got endpoints: latency-svc-6xh7w [910.174844ms] Aug 20 22:59:26.603: INFO: Created: latency-svc-8p7b2 Aug 20 22:59:26.631: INFO: Got endpoints: latency-svc-8p7b2 [927.631434ms] Aug 20 22:59:26.686: INFO: Created: latency-svc-h5s4q Aug 20 22:59:26.719: INFO: Created: latency-svc-6ttz6 Aug 20 22:59:26.720: INFO: Got endpoints: latency-svc-h5s4q [986.159648ms] Aug 20 22:59:26.773: INFO: Got endpoints: latency-svc-6ttz6 [969.748738ms] Aug 20 22:59:26.830: INFO: Created: latency-svc-cfc2k Aug 20 22:59:26.854: INFO: Got endpoints: latency-svc-cfc2k [988.053847ms] Aug 20 22:59:26.894: INFO: Created: latency-svc-ngxp8 Aug 20 22:59:26.956: INFO: Got endpoints: latency-svc-ngxp8 [1.006174984s] Aug 20 22:59:26.984: INFO: Created: latency-svc-txkrb Aug 20 22:59:26.992: INFO: Got endpoints: latency-svc-txkrb [979.972582ms] Aug 20 22:59:27.017: INFO: Created: latency-svc-zw6cm Aug 20 22:59:27.041: INFO: Got endpoints: latency-svc-zw6cm [935.958143ms] Aug 20 22:59:27.105: INFO: Created: latency-svc-hhms4 Aug 20 22:59:27.113: INFO: Got endpoints: latency-svc-hhms4 [969.834493ms] Aug 20 22:59:27.242: INFO: Created: latency-svc-gkvbs Aug 20 22:59:27.246: INFO: Got endpoints: latency-svc-gkvbs [1.066216184s] Aug 20 22:59:27.320: INFO: Created: latency-svc-mgmtq Aug 20 22:59:27.341: INFO: Got endpoints: latency-svc-mgmtq [1.092720945s] Aug 20 22:59:27.427: INFO: Created: latency-svc-jr4m7 Aug 20 22:59:27.431: INFO: Got endpoints: latency-svc-jr4m7 [1.143032471s] Aug 20 22:59:27.510: INFO: Created: latency-svc-h45wf Aug 20 22:59:27.522: INFO: Got endpoints: latency-svc-h45wf [1.191228451s] Aug 20 22:59:27.625: INFO: Created: latency-svc-mlkmm Aug 20 22:59:27.635: INFO: Got endpoints: latency-svc-mlkmm [1.232351401s] Aug 20 22:59:27.670: INFO: Created: latency-svc-fjx6t Aug 20 22:59:27.740: INFO: Got endpoints: latency-svc-fjx6t [1.252740256s] Aug 20 22:59:27.742: INFO: Created: latency-svc-st5ks Aug 20 22:59:27.750: INFO: Got endpoints: latency-svc-st5ks [1.185070183s] Aug 20 22:59:27.782: INFO: Created: latency-svc-rk5pp Aug 20 22:59:27.792: INFO: Got endpoints: latency-svc-rk5pp [1.160750993s] Aug 20 22:59:27.811: INFO: Created: latency-svc-q9xn5 Aug 20 22:59:27.823: 
INFO: Got endpoints: latency-svc-q9xn5 [1.103434257s] Aug 20 22:59:27.879: INFO: Created: latency-svc-lcmxr Aug 20 22:59:27.904: INFO: Got endpoints: latency-svc-lcmxr [1.130863593s] Aug 20 22:59:27.905: INFO: Created: latency-svc-zwvvw Aug 20 22:59:27.919: INFO: Got endpoints: latency-svc-zwvvw [1.064945622s] Aug 20 22:59:27.946: INFO: Created: latency-svc-n2gcw Aug 20 22:59:27.962: INFO: Got endpoints: latency-svc-n2gcw [1.005857722s] Aug 20 22:59:28.024: INFO: Created: latency-svc-kw5wc Aug 20 22:59:28.025: INFO: Got endpoints: latency-svc-kw5wc [1.033047712s] Aug 20 22:59:28.063: INFO: Created: latency-svc-hvslx Aug 20 22:59:28.088: INFO: Got endpoints: latency-svc-hvslx [1.047377988s] Aug 20 22:59:28.111: INFO: Created: latency-svc-mng6d Aug 20 22:59:28.158: INFO: Got endpoints: latency-svc-mng6d [1.045121955s] Aug 20 22:59:28.187: INFO: Created: latency-svc-xxdj6 Aug 20 22:59:28.222: INFO: Got endpoints: latency-svc-xxdj6 [976.183796ms] Aug 20 22:59:28.253: INFO: Created: latency-svc-rmtr8 Aug 20 22:59:28.296: INFO: Got endpoints: latency-svc-rmtr8 [954.996472ms] Aug 20 22:59:28.314: INFO: Created: latency-svc-mw4mh Aug 20 22:59:28.323: INFO: Got endpoints: latency-svc-mw4mh [892.301643ms] Aug 20 22:59:28.339: INFO: Created: latency-svc-mj6x5 Aug 20 22:59:28.347: INFO: Got endpoints: latency-svc-mj6x5 [825.57025ms] Aug 20 22:59:28.394: INFO: Created: latency-svc-nccq9 Aug 20 22:59:28.470: INFO: Got endpoints: latency-svc-nccq9 [834.586005ms] Aug 20 22:59:28.472: INFO: Created: latency-svc-2z68z Aug 20 22:59:28.479: INFO: Got endpoints: latency-svc-2z68z [739.912395ms] Aug 20 22:59:28.501: INFO: Created: latency-svc-zpc8m Aug 20 22:59:28.543: INFO: Got endpoints: latency-svc-zpc8m [793.567591ms] Aug 20 22:59:28.644: INFO: Created: latency-svc-qvw8b Aug 20 22:59:28.666: INFO: Got endpoints: latency-svc-qvw8b [874.028308ms] Aug 20 22:59:28.667: INFO: Created: latency-svc-kc4wl Aug 20 22:59:28.690: INFO: Got endpoints: latency-svc-kc4wl [867.008507ms] Aug 20 22:59:28.717: INFO: Created: latency-svc-mrb8q Aug 20 22:59:28.733: INFO: Got endpoints: latency-svc-mrb8q [828.684686ms] Aug 20 22:59:28.787: INFO: Created: latency-svc-s94wm Aug 20 22:59:28.799: INFO: Got endpoints: latency-svc-s94wm [879.789275ms] Aug 20 22:59:28.816: INFO: Created: latency-svc-dhjk2 Aug 20 22:59:28.829: INFO: Got endpoints: latency-svc-dhjk2 [867.884166ms] Aug 20 22:59:28.846: INFO: Created: latency-svc-v5cs2 Aug 20 22:59:28.860: INFO: Got endpoints: latency-svc-v5cs2 [835.033992ms] Aug 20 22:59:28.876: INFO: Created: latency-svc-6dq5m Aug 20 22:59:28.943: INFO: Got endpoints: latency-svc-6dq5m [855.19335ms] Aug 20 22:59:28.945: INFO: Created: latency-svc-qjxh6 Aug 20 22:59:28.957: INFO: Got endpoints: latency-svc-qjxh6 [798.448118ms] Aug 20 22:59:28.976: INFO: Created: latency-svc-6vv48 Aug 20 22:59:28.986: INFO: Got endpoints: latency-svc-6vv48 [764.209498ms] Aug 20 22:59:29.005: INFO: Created: latency-svc-g74bh Aug 20 22:59:29.017: INFO: Got endpoints: latency-svc-g74bh [720.190943ms] Aug 20 22:59:29.039: INFO: Created: latency-svc-bhxc8 Aug 20 22:59:29.105: INFO: Got endpoints: latency-svc-bhxc8 [781.970704ms] Aug 20 22:59:29.108: INFO: Created: latency-svc-pzp4n Aug 20 22:59:29.119: INFO: Got endpoints: latency-svc-pzp4n [771.981732ms] Aug 20 22:59:29.140: INFO: Created: latency-svc-szbpv Aug 20 22:59:29.168: INFO: Got endpoints: latency-svc-szbpv [697.722445ms] Aug 20 22:59:29.261: INFO: Created: latency-svc-gxrfl Aug 20 22:59:29.269: INFO: Got endpoints: latency-svc-gxrfl [789.272942ms] Aug 20 
22:59:29.332: INFO: Created: latency-svc-pfmxh Aug 20 22:59:29.348: INFO: Got endpoints: latency-svc-pfmxh [804.98859ms] Aug 20 22:59:29.410: INFO: Created: latency-svc-7xq8b Aug 20 22:59:29.427: INFO: Got endpoints: latency-svc-7xq8b [760.401304ms] Aug 20 22:59:29.449: INFO: Created: latency-svc-hh6gw Aug 20 22:59:29.463: INFO: Got endpoints: latency-svc-hh6gw [772.225724ms] Aug 20 22:59:29.463: INFO: Latencies: [78.549667ms 421.661168ms 455.761281ms 558.808036ms 589.414159ms 630.770397ms 648.771654ms 675.795529ms 697.722445ms 700.641841ms 701.061278ms 703.422669ms 710.830596ms 718.108224ms 718.496493ms 718.738992ms 720.190943ms 723.586256ms 739.912395ms 754.917323ms 760.401304ms 764.209498ms 771.981732ms 772.225724ms 781.970704ms 783.721745ms 787.876473ms 789.272942ms 792.446531ms 793.567591ms 794.706477ms 796.28313ms 796.462225ms 798.448118ms 799.378721ms 802.591976ms 804.98859ms 809.165452ms 812.155207ms 819.690313ms 824.761781ms 825.57025ms 827.9286ms 828.684686ms 829.975805ms 832.5088ms 832.795477ms 834.586005ms 835.033992ms 835.901144ms 838.058745ms 840.563606ms 842.578702ms 848.094097ms 855.19335ms 857.641794ms 858.345138ms 861.320193ms 861.87471ms 863.269164ms 864.319723ms 866.526972ms 867.008507ms 867.884166ms 868.147279ms 872.755732ms 873.527231ms 874.028308ms 874.112587ms 876.239284ms 877.218569ms 879.789275ms 879.985314ms 882.926441ms 885.08505ms 888.492775ms 888.913085ms 891.325434ms 892.301643ms 892.337193ms 892.79145ms 895.52031ms 897.272004ms 897.408257ms 897.529792ms 898.477057ms 902.220716ms 903.846954ms 904.774006ms 909.454684ms 909.858714ms 910.174844ms 910.191994ms 915.249506ms 915.666796ms 917.341653ms 919.786769ms 920.7348ms 921.473253ms 921.505039ms 921.745706ms 921.770803ms 921.992194ms 927.122729ms 927.571703ms 927.631434ms 930.526854ms 933.693814ms 933.697881ms 933.75802ms 933.895826ms 933.91935ms 935.291194ms 935.958143ms 937.92329ms 939.961924ms 939.965873ms 940.201361ms 940.209297ms 942.721095ms 943.161902ms 943.332533ms 947.390424ms 951.634157ms 952.073921ms 954.996472ms 957.250886ms 958.747943ms 969.748738ms 969.834493ms 970.539314ms 970.822676ms 976.183796ms 979.972582ms 986.159648ms 988.053847ms 989.120872ms 994.391459ms 997.207337ms 1.003279358s 1.005857722s 1.006174984s 1.012726885s 1.017482645s 1.022944704s 1.02813234s 1.033047712s 1.038619018s 1.045121955s 1.045450896s 1.047377988s 1.051358066s 1.051371347s 1.062814591s 1.062846977s 1.064945622s 1.065028207s 1.066216184s 1.069140774s 1.077948904s 1.080230882s 1.081733547s 1.092720945s 1.097873248s 1.103434257s 1.106469006s 1.114775089s 1.130863593s 1.132011382s 1.142247827s 1.143032471s 1.160750993s 1.1669911s 1.170352364s 1.185070183s 1.191228451s 1.232351401s 1.252740256s 1.273391633s 1.298834227s 1.330210553s 1.340947647s 1.36358015s 1.407104249s 1.450654274s 1.485853656s 1.493448747s 1.544809495s 1.548462425s 1.598825756s 1.615831737s 1.638563412s 1.64123311s 1.723206306s 1.729095943s 1.730013596s 1.745922326s 1.763350948s 1.783970988s 1.827919021s] Aug 20 22:59:29.463: INFO: 50 %ile: 921.745706ms Aug 20 22:59:29.463: INFO: 90 %ile: 1.330210553s Aug 20 22:59:29.463: INFO: 99 %ile: 1.783970988s Aug 20 22:59:29.463: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:59:29.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-20" for this suite. 
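The 50/90/99 %ile figures above are order statistics over the 200 endpoint-propagation samples listed in the Latencies array. A minimal sketch of that summary computation (hypothetical helper, standard library only):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns a nearest-rank percentile (0 < p <= 100) of
// already-sorted samples, in the style of the summary logged above.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted))*p/100.0) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{ // a few of the latencies logged above
		78549667 * time.Nanosecond,
		921745706 * time.Nanosecond,
		1330210553 * time.Nanosecond,
		1783970988 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%g %%ile: %v\n", p, percentile(samples, p))
	}
}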
• [SLOW TEST:17.740 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":26,"skipped":390,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:59:29.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:59:29.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3001" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":27,"skipped":391,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:59:29.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 20 22:59:34.065: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:59:34.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7965" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":392,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:59:34.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Aug 20 22:59:34.217: INFO: Waiting up to 5m0s for pod "var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c" in namespace "var-expansion-3457" to be "success or failure" Aug 20 22:59:34.246: INFO: Pod "var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.888176ms Aug 20 22:59:36.381: INFO: Pod "var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164188565s Aug 20 22:59:38.398: INFO: Pod "var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181427275s Aug 20 22:59:40.422: INFO: Pod "var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205741115s STEP: Saw pod success Aug 20 22:59:40.423: INFO: Pod "var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c" satisfied condition "success or failure" Aug 20 22:59:40.431: INFO: Trying to get logs from node jerma-worker pod var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c container dapi-container: STEP: delete the pod Aug 20 22:59:40.618: INFO: Waiting for pod var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c to disappear Aug 20 22:59:40.629: INFO: Pod var-expansion-cf3d7a98-6ef7-4977-9b21-928bfbe1f29c no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 22:59:40.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3457" for this suite. • [SLOW TEST:6.604 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 22:59:40.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-nv78 STEP: Creating a pod to test atomic-volume-subpath Aug 20 22:59:40.917: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nv78" in namespace "subpath-2513" to be "success or failure" Aug 20 22:59:40.939: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.33663ms Aug 20 22:59:42.962: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04456917s Aug 20 22:59:45.015: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 4.097453146s Aug 20 22:59:47.093: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 6.175989116s Aug 20 22:59:49.105: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 8.187125843s Aug 20 22:59:51.111: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 10.193416976s Aug 20 22:59:53.136: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 12.218503108s Aug 20 22:59:55.150: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 14.232470887s Aug 20 22:59:57.189: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 16.271399982s Aug 20 22:59:59.199: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 18.281058364s Aug 20 23:00:01.208: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 20.29005081s Aug 20 23:00:03.225: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 22.307578267s Aug 20 23:00:05.229: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Running", Reason="", readiness=true. Elapsed: 24.311969472s Aug 20 23:00:07.234: INFO: Pod "pod-subpath-test-projected-nv78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.316275999s STEP: Saw pod success Aug 20 23:00:07.234: INFO: Pod "pod-subpath-test-projected-nv78" satisfied condition "success or failure" Aug 20 23:00:07.237: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-nv78 container test-container-subpath-projected-nv78: STEP: delete the pod Aug 20 23:00:07.274: INFO: Waiting for pod pod-subpath-test-projected-nv78 to disappear Aug 20 23:00:07.285: INFO: Pod pod-subpath-test-projected-nv78 no longer exists STEP: Deleting pod pod-subpath-test-projected-nv78 Aug 20 23:00:07.285: INFO: Deleting pod "pod-subpath-test-projected-nv78" in namespace "subpath-2513" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:00:07.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2513" for this suite. 
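The atomic-writer check above mounts a projected volume into the container at a subPath and reads it repeatedly while the volume contents are swapped underneath. A sketch of the volume wiring (hypothetical names and paths; assumes k8s.io/api):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /probe-volume/probe-file; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/probe-volume",
					// SubPath mounts a single entry of the volume rather than its
					// root; the test verifies reads stay consistent while the
					// projected contents are atomically updated.
					SubPath: "probe-subpath",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}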
• [SLOW TEST:26.603 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":30,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:00:07.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-6fe34fa5-40af-41fb-b818-70393e8e7d68 STEP: Creating a pod to test consume configMaps Aug 20 23:00:07.437: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7" in namespace "projected-5826" to be "success or failure" Aug 20 23:00:07.455: INFO: Pod "pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.644206ms Aug 20 23:00:09.459: INFO: Pod "pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022037649s Aug 20 23:00:11.464: INFO: Pod "pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026507258s STEP: Saw pod success Aug 20 23:00:11.464: INFO: Pod "pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7" satisfied condition "success or failure" Aug 20 23:00:11.467: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7 container projected-configmap-volume-test: STEP: delete the pod Aug 20 23:00:11.598: INFO: Waiting for pod pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7 to disappear Aug 20 23:00:11.615: INFO: Pod pod-projected-configmaps-0b33713d-9916-43da-8df9-19e779eee8e7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:00:11.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5826" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":458,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:00:11.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:00:11.814: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:00:15.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-983" for this suite. 
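Remote command execution over websockets goes through the pod's exec subresource: the client upgrades an HTTPS request to a websocket and streams stdin/stdout/stderr over it. A sketch of the URL such a test effectively dials (standard library only; host and names hypothetical):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Build the exec-subresource URL for a pod; a websocket client would
	// dial this with a bearer token (or client certificate) attached.
	u := url.URL{
		Scheme: "wss",
		Host:   "apiserver.example.com:6443", // hypothetical API server address
		Path:   "/api/v1/namespaces/pods-983/pods/pod-exec-websocket/exec",
	}
	q := url.Values{}
	q.Add("container", "main")
	q.Add("stdout", "true")
	q.Add("stderr", "true")
	// Each command argument is a separate "command" query parameter.
	for _, arg := range []string{"/bin/sh", "-c", "echo remote execution over websockets"} {
		q.Add("command", arg)
	}
	u.RawQuery = q.Encode()
	fmt.Println(u.String())
}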
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":469,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:00:15.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2610 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2610 STEP: creating replication controller externalsvc in namespace services-2610 I0820 23:00:16.490803 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2610, replica count: 2 I0820 23:00:19.541254 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 23:00:22.541484 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 20 23:00:22.573: INFO: Creating new exec pod Aug 20 23:00:26.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2610 execpodxrx76 -- /bin/sh -x -c nslookup clusterip-service' Aug 20 23:00:30.050: INFO: stderr: "I0820 23:00:29.929413 55 log.go:172] (0xc0000f6fd0) (0xc000633e00) Create stream\nI0820 23:00:29.929449 55 log.go:172] (0xc0000f6fd0) (0xc000633e00) Stream added, broadcasting: 1\nI0820 23:00:29.932220 55 log.go:172] (0xc0000f6fd0) Reply frame received for 1\nI0820 23:00:29.932294 55 log.go:172] (0xc0000f6fd0) (0xc000633ea0) Create stream\nI0820 23:00:29.932318 55 log.go:172] (0xc0000f6fd0) (0xc000633ea0) Stream added, broadcasting: 3\nI0820 23:00:29.934512 55 log.go:172] (0xc0000f6fd0) Reply frame received for 3\nI0820 23:00:29.934547 55 log.go:172] (0xc0000f6fd0) (0xc0005ae5a0) Create stream\nI0820 23:00:29.934556 55 log.go:172] (0xc0000f6fd0) (0xc0005ae5a0) Stream added, broadcasting: 5\nI0820 23:00:29.935737 55 log.go:172] (0xc0000f6fd0) Reply frame received for 5\nI0820 23:00:30.031637 55 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0820 23:00:30.031680 55 log.go:172] (0xc0005ae5a0) (5) Data frame handling\nI0820 23:00:30.031708 55 log.go:172] (0xc0005ae5a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0820 23:00:30.038249 55 log.go:172] (0xc0000f6fd0) Data frame received 
for 3\nI0820 23:00:30.038272 55 log.go:172] (0xc000633ea0) (3) Data frame handling\nI0820 23:00:30.038286 55 log.go:172] (0xc000633ea0) (3) Data frame sent\nI0820 23:00:30.039294 55 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0820 23:00:30.039312 55 log.go:172] (0xc000633ea0) (3) Data frame handling\nI0820 23:00:30.039328 55 log.go:172] (0xc000633ea0) (3) Data frame sent\nI0820 23:00:30.040024 55 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0820 23:00:30.040039 55 log.go:172] (0xc000633ea0) (3) Data frame handling\nI0820 23:00:30.040104 55 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0820 23:00:30.040126 55 log.go:172] (0xc0005ae5a0) (5) Data frame handling\nI0820 23:00:30.042165 55 log.go:172] (0xc0000f6fd0) Data frame received for 1\nI0820 23:00:30.042279 55 log.go:172] (0xc000633e00) (1) Data frame handling\nI0820 23:00:30.042357 55 log.go:172] (0xc000633e00) (1) Data frame sent\nI0820 23:00:30.042424 55 log.go:172] (0xc0000f6fd0) (0xc000633e00) Stream removed, broadcasting: 1\nI0820 23:00:30.042457 55 log.go:172] (0xc0000f6fd0) Go away received\nI0820 23:00:30.042804 55 log.go:172] (0xc0000f6fd0) (0xc000633e00) Stream removed, broadcasting: 1\nI0820 23:00:30.042825 55 log.go:172] (0xc0000f6fd0) (0xc000633ea0) Stream removed, broadcasting: 3\nI0820 23:00:30.042833 55 log.go:172] (0xc0000f6fd0) (0xc0005ae5a0) Stream removed, broadcasting: 5\n" Aug 20 23:00:30.050: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2610.svc.cluster.local\tcanonical name = externalsvc.services-2610.svc.cluster.local.\nName:\texternalsvc.services-2610.svc.cluster.local\nAddress: 10.105.0.121\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2610, will wait for the garbage collector to delete the pods Aug 20 23:00:30.112: INFO: Deleting ReplicationController externalsvc took: 6.607353ms Aug 20 23:00:30.212: INFO: Terminating ReplicationController externalsvc pods took: 100.215819ms Aug 20 23:00:41.838: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:00:41.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2610" for this suite. 
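[editor's note] The type flip exercised above is a single Update against the Service object. A minimal client-go sketch of the same operation follows; it assumes a recent client-go (v0.18+, where API calls take a context), and the kubeconfig path, namespace, and object names simply mirror the log rather than anything canonical:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	svc, err := cs.CoreV1().Services("services-2610").Get(ctx, "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// An ExternalName service must not carry a cluster IP, so the allocated
	// one is cleared; cluster DNS then answers the old name with a CNAME to
	// the target FQDN, which is exactly what the nslookup above verifies.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ClusterIP = ""
	svc.Spec.ExternalName = "externalsvc.services-2610.svc.cluster.local"

	if _, err := cs.CoreV1().Services("services-2610").Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("clusterip-service now resolves via ExternalName")
}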
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.935 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":33,"skipped":479,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:00:41.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 20 23:00:41.996: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9964 /api/v1/namespaces/watch-9964/configmaps/e2e-watch-test-label-changed 43e09b93-48f1-4d5e-ad92-ac798be47060 1944437 0 2020-08-20 23:00:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 23:00:41.997: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9964 /api/v1/namespaces/watch-9964/configmaps/e2e-watch-test-label-changed 43e09b93-48f1-4d5e-ad92-ac798be47060 1944438 0 2020-08-20 23:00:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 20 23:00:41.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9964 /api/v1/namespaces/watch-9964/configmaps/e2e-watch-test-label-changed 43e09b93-48f1-4d5e-ad92-ac798be47060 1944439 0 2020-08-20 23:00:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the 
watched object when the label value was restored Aug 20 23:00:52.125: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9964 /api/v1/namespaces/watch-9964/configmaps/e2e-watch-test-label-changed 43e09b93-48f1-4d5e-ad92-ac798be47060 1944504 0 2020-08-20 23:00:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 20 23:00:52.125: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9964 /api/v1/namespaces/watch-9964/configmaps/e2e-watch-test-label-changed 43e09b93-48f1-4d5e-ad92-ac798be47060 1944505 0 2020-08-20 23:00:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Aug 20 23:00:52.126: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9964 /api/v1/namespaces/watch-9964/configmaps/e2e-watch-test-label-changed 43e09b93-48f1-4d5e-ad92-ac798be47060 1944506 0 2020-08-20 23:00:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:00:52.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9964" for this suite. • [SLOW TEST:10.219 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":34,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:00:52.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 20 23:00:52.354: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 20 23:00:57.391: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] 
ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:00:58.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9037" for this suite. • [SLOW TEST:6.339 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":35,"skipped":583,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:00:58.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 20 23:00:59.013: INFO: Waiting up to 5m0s for pod "downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c" in namespace "downward-api-3145" to be "success or failure" Aug 20 23:00:59.330: INFO: Pod "downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c": Phase="Pending", Reason="", readiness=false. Elapsed: 316.996528ms Aug 20 23:01:01.333: INFO: Pod "downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31960697s Aug 20 23:01:03.337: INFO: Pod "downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323765123s Aug 20 23:01:05.341: INFO: Pod "downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.327911846s STEP: Saw pod success Aug 20 23:01:05.341: INFO: Pod "downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c" satisfied condition "success or failure" Aug 20 23:01:05.344: INFO: Trying to get logs from node jerma-worker pod downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c container dapi-container: STEP: delete the pod Aug 20 23:01:05.379: INFO: Waiting for pod downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c to disappear Aug 20 23:01:05.405: INFO: Pod downward-api-8f056e04-616c-4000-a6bc-61717c3dbf3c no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:01:05.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3145" for this suite. • [SLOW TEST:6.948 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":591,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:01:05.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 20 23:01:05.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a" in namespace "downward-api-3715" to be "success or failure" Aug 20 23:01:05.479: INFO: Pod "downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.521427ms Aug 20 23:01:07.499: INFO: Pod "downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022947786s Aug 20 23:01:09.503: INFO: Pod "downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027059872s STEP: Saw pod success Aug 20 23:01:09.503: INFO: Pod "downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a" satisfied condition "success or failure" Aug 20 23:01:09.506: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a container client-container: STEP: delete the pod Aug 20 23:01:09.524: INFO: Waiting for pod downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a to disappear Aug 20 23:01:09.588: INFO: Pod downwardapi-volume-444f6107-8b55-49e9-b61e-cfc659bbe95a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:01:09.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3715" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":600,"failed":0} ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:01:09.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-265099ef-f9c5-4278-a37d-938e59d8925c STEP: Creating secret with name secret-projected-all-test-volume-a259c43b-fe2b-4cf3-9eb4-de06d693fe6f STEP: Creating a pod to test Check all projections for projected volume plugin Aug 20 23:01:09.674: INFO: Waiting up to 5m0s for pod "projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead" in namespace "projected-9736" to be "success or failure" Aug 20 23:01:09.714: INFO: Pod "projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead": Phase="Pending", Reason="", readiness=false. Elapsed: 40.336817ms Aug 20 23:01:11.718: INFO: Pod "projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044279632s Aug 20 23:01:13.843: INFO: Pod "projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.168545927s STEP: Saw pod success Aug 20 23:01:13.843: INFO: Pod "projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead" satisfied condition "success or failure" Aug 20 23:01:13.847: INFO: Trying to get logs from node jerma-worker pod projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead container projected-all-volume-test: STEP: delete the pod Aug 20 23:01:13.865: INFO: Waiting for pod projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead to disappear Aug 20 23:01:13.870: INFO: Pod projected-volume-73162ad6-5e92-4d36-af76-1424f38f5ead no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:01:13.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9736" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":38,"skipped":600,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:01:13.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:01:13.925: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:01:14.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5333" for this suite. 
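[editor's note] The CRD test above exercises the status sub-resource: reads and writes to /status leave the spec untouched. A hedged sketch of such a write, assuming a recent apiextensions-apiserver typed client (whose generated Patch accepts trailing subresource names); the condition type and messages are invented for illustration, and a merge patch replaces the whole conditions list, which is fine only for a demo:

package main

import (
	"context"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// setCRDStatusCondition patches only the /status subresource of a CRD;
// a patch without the trailing "status" argument would target the spec.
func setCRDStatusCondition(ctx context.Context, cs apiextclient.Interface, name string) error {
	patch := []byte(`{"status":{"conditions":[{"type":"DemoStatusUpdated","status":"True","reason":"E2eDemo","message":"patched via the status subresource"}]}}`)
	_, err := cs.ApiextensionsV1().CustomResourceDefinitions().Patch(
		ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}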
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":39,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:01:14.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 20 23:01:19.327: INFO: Successfully updated pod "labelsupdate63a0df0f-b483-4d5f-8c80-9740a1d107f4" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:01:21.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2002" for this suite. 
• [SLOW TEST:6.835 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:01:21.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 23:01:22.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 23:01:24.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 20 23:01:26.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561282, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 23:01:29.520: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:01:29.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-847-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:01:31.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8288" for this suite. STEP: Destroying namespace "webhook-8288-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.753 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":41,"skipped":716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:01:31.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating secret with name secret-test-0aef056a-9bf8-47e5-9fcd-a8456e230024 STEP: Creating a pod to test consume secrets Aug 20 23:01:31.214: INFO: Waiting up to 5m0s for pod "pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54" in namespace "secrets-7828" to be "success or failure" Aug 20 23:01:31.223: INFO: Pod "pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54": Phase="Pending", Reason="", readiness=false. Elapsed: 9.116704ms Aug 20 23:01:33.227: INFO: Pod "pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012596608s Aug 20 23:01:35.346: INFO: Pod "pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13169436s STEP: Saw pod success Aug 20 23:01:35.346: INFO: Pod "pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54" satisfied condition "success or failure" Aug 20 23:01:35.349: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54 container secret-env-test: STEP: delete the pod Aug 20 23:01:35.417: INFO: Waiting for pod pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54 to disappear Aug 20 23:01:35.544: INFO: Pod pod-secrets-47e88a2e-7185-436d-9b2e-4d4a7bbffa54 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:01:35.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7828" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":749,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:01:35.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 20 23:01:35.858: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:35.883: INFO: Number of nodes with available pods: 0 Aug 20 23:01:35.883: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:01:36.888: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:36.891: INFO: Number of nodes with available pods: 0 Aug 20 23:01:36.891: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:01:38.078: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:38.086: INFO: Number of nodes with available pods: 0 Aug 20 23:01:38.086: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:01:38.889: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:38.892: INFO: Number of nodes with available pods: 0 Aug 20 23:01:38.892: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:01:39.909: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:39.913: INFO: Number of nodes with available pods: 1 Aug 20 23:01:39.913: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:01:40.888: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:40.892: INFO: Number of nodes with available pods: 2 Aug 20 23:01:40.892: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Aug 20 23:01:40.925: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:40.951: INFO: Number of nodes with available pods: 1 Aug 20 23:01:40.951: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:41.955: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:41.959: INFO: Number of nodes with available pods: 1 Aug 20 23:01:41.959: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:42.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:42.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:42.961: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:43.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:43.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:43.960: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:44.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:44.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:44.960: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:45.955: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:45.957: INFO: Number of nodes with available pods: 1 Aug 20 23:01:45.957: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:46.964: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:46.967: INFO: Number of nodes with available pods: 1 Aug 20 23:01:46.967: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:47.957: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:47.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:47.960: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:48.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:48.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:48.960: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:49.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:49.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:49.960: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:50.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:50.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:50.960: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:51.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:51.960: INFO: Number of nodes with available pods: 1 Aug 20 23:01:51.960: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:52.956: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:52.959: INFO: Number of nodes with available pods: 1 Aug 20 23:01:52.959: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:54.029: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:54.033: INFO: Number of nodes with available pods: 1 Aug 20 23:01:54.033: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:54.958: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:54.961: INFO: Number of nodes with available pods: 1 Aug 20 23:01:54.961: INFO: Node jerma-worker2 is running more than one daemon pod Aug 20 23:01:55.955: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:01:55.958: INFO: Number of nodes with available pods: 2 Aug 20 23:01:55.958: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5659, will wait for the garbage collector to delete the pods Aug 20 23:01:56.017: INFO: Deleting DaemonSet.extensions daemon-set took: 4.92979ms Aug 20 23:01:56.318: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.260238ms Aug 20 23:02:01.830: INFO: Number of nodes with available pods: 0 Aug 20 23:02:01.830: INFO: Number of running nodes: 0, number of available pods: 0 Aug 20 23:02:01.833: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5659/daemonsets","resourceVersion":"1945203"},"items":null} Aug 20 23:02:01.835: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5659/pods","resourceVersion":"1945203"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:02:01.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5659" for this suite. 
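[editor's note] The DaemonSet run above shows why jerma-control-plane is skipped: the pod template carries no toleration for the node-role.kubernetes.io/master NoSchedule taint, so only the two workers count toward "Number of running nodes". A minimal sketch of a comparable DaemonSet, with hypothetical label and image choices:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSimpleDaemonSet creates a DaemonSet akin to the test's "daemon-set".
// Adding a toleration for node-role.kubernetes.io/master to the pod spec
// would extend it to the control-plane node as well.
func createSimpleDaemonSet(ctx context.Context, cs kubernetes.Interface, ns string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1", // any long-running image works here
					}},
				},
			},
		},
	}
	_, err := cs.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	return err
}

The "Stop a daemon pod, check that the daemon pod is revived" phase then just deletes one daemon pod and polls until the controller has recreated it, restoring two available pods.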
• [SLOW TEST:26.304 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":43,"skipped":760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:02:01.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 20 23:02:01.962: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2" in namespace "downward-api-8516" to be "success or failure" Aug 20 23:02:02.005: INFO: Pod "downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2": Phase="Pending", Reason="", readiness=false. Elapsed: 43.542407ms Aug 20 23:02:04.009: INFO: Pod "downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047064849s Aug 20 23:02:06.018: INFO: Pod "downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056297715s STEP: Saw pod success Aug 20 23:02:06.018: INFO: Pod "downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2" satisfied condition "success or failure" Aug 20 23:02:06.020: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2 container client-container: STEP: delete the pod Aug 20 23:02:06.049: INFO: Waiting for pod downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2 to disappear Aug 20 23:02:06.083: INFO: Pod downwardapi-volume-ed724796-d8b7-4159-966f-3ce4bc6ebbb2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:02:06.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8516" for this suite. 
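[editor's note] The 'Waiting up to 5m0s for pod ... to be "success or failure"' lines, with their Pending -> Succeeded transitions, recur throughout this run. A sketch of that polling loop using apimachinery's wait package; the interval and timeout simply mirror the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod phase until it reaches Succeeded,
// matching the Pending -> Succeeded progression printed in the log.
func waitForPodSuccess(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default: // Pending or Running: keep polling
			return false, nil
		}
	})
}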
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":798,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:02:06.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4657 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 20 23:02:06.147: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 20 23:02:30.316: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.231:8080/dial?request=hostname&protocol=http&host=10.244.2.218&port=8080&tries=1'] Namespace:pod-network-test-4657 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 23:02:30.316: INFO: >>> kubeConfig: /root/.kube/config I0820 23:02:30.352718 6 log.go:172] (0xc0016b4580) (0xc002f13680) Create stream I0820 23:02:30.352855 6 log.go:172] (0xc0016b4580) (0xc002f13680) Stream added, broadcasting: 1 I0820 23:02:30.354763 6 log.go:172] (0xc0016b4580) Reply frame received for 1 I0820 23:02:30.354808 6 log.go:172] (0xc0016b4580) (0xc0017b8000) Create stream I0820 23:02:30.354819 6 log.go:172] (0xc0016b4580) (0xc0017b8000) Stream added, broadcasting: 3 I0820 23:02:30.355533 6 log.go:172] (0xc0016b4580) Reply frame received for 3 I0820 23:02:30.355573 6 log.go:172] (0xc0016b4580) (0xc002f13720) Create stream I0820 23:02:30.355588 6 log.go:172] (0xc0016b4580) (0xc002f13720) Stream added, broadcasting: 5 I0820 23:02:30.356279 6 log.go:172] (0xc0016b4580) Reply frame received for 5 I0820 23:02:30.435543 6 log.go:172] (0xc0016b4580) Data frame received for 3 I0820 23:02:30.435577 6 log.go:172] (0xc0017b8000) (3) Data frame handling I0820 23:02:30.435594 6 log.go:172] (0xc0017b8000) (3) Data frame sent I0820 23:02:30.436570 6 log.go:172] (0xc0016b4580) Data frame received for 3 I0820 23:02:30.436609 6 log.go:172] (0xc0017b8000) (3) Data frame handling I0820 23:02:30.436634 6 log.go:172] (0xc0016b4580) Data frame received for 5 I0820 23:02:30.436646 6 log.go:172] (0xc002f13720) (5) Data frame handling I0820 23:02:30.438692 6 log.go:172] (0xc0016b4580) Data frame received for 1 I0820 23:02:30.438721 6 log.go:172] (0xc002f13680) (1) Data frame handling I0820 23:02:30.438731 6 log.go:172] (0xc002f13680) (1) Data frame sent I0820 23:02:30.438742 6 log.go:172] (0xc0016b4580) (0xc002f13680) Stream removed, broadcasting: 1 I0820 23:02:30.438760 6 log.go:172] (0xc0016b4580) Go away received I0820 23:02:30.439081 6 
log.go:172] (0xc0016b4580) (0xc002f13680) Stream removed, broadcasting: 1 I0820 23:02:30.439095 6 log.go:172] (0xc0016b4580) (0xc0017b8000) Stream removed, broadcasting: 3 I0820 23:02:30.439101 6 log.go:172] (0xc0016b4580) (0xc002f13720) Stream removed, broadcasting: 5 Aug 20 23:02:30.439: INFO: Waiting for responses: map[] Aug 20 23:02:30.484: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.231:8080/dial?request=hostname&protocol=http&host=10.244.1.230&port=8080&tries=1'] Namespace:pod-network-test-4657 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 23:02:30.484: INFO: >>> kubeConfig: /root/.kube/config I0820 23:02:30.510530 6 log.go:172] (0xc002f1e630) (0xc0017b8780) Create stream I0820 23:02:30.510561 6 log.go:172] (0xc002f1e630) (0xc0017b8780) Stream added, broadcasting: 1 I0820 23:02:30.513800 6 log.go:172] (0xc002f1e630) Reply frame received for 1 I0820 23:02:30.513866 6 log.go:172] (0xc002f1e630) (0xc0024120a0) Create stream I0820 23:02:30.513884 6 log.go:172] (0xc002f1e630) (0xc0024120a0) Stream added, broadcasting: 3 I0820 23:02:30.514846 6 log.go:172] (0xc002f1e630) Reply frame received for 3 I0820 23:02:30.514874 6 log.go:172] (0xc002f1e630) (0xc0017b8a00) Create stream I0820 23:02:30.514884 6 log.go:172] (0xc002f1e630) (0xc0017b8a00) Stream added, broadcasting: 5 I0820 23:02:30.515930 6 log.go:172] (0xc002f1e630) Reply frame received for 5 I0820 23:02:30.585631 6 log.go:172] (0xc002f1e630) Data frame received for 3 I0820 23:02:30.585664 6 log.go:172] (0xc0024120a0) (3) Data frame handling I0820 23:02:30.585696 6 log.go:172] (0xc0024120a0) (3) Data frame sent I0820 23:02:30.586677 6 log.go:172] (0xc002f1e630) Data frame received for 5 I0820 23:02:30.586712 6 log.go:172] (0xc0017b8a00) (5) Data frame handling I0820 23:02:30.586731 6 log.go:172] (0xc002f1e630) Data frame received for 3 I0820 23:02:30.586736 6 log.go:172] (0xc0024120a0) (3) Data frame handling I0820 23:02:30.588384 6 log.go:172] (0xc002f1e630) Data frame received for 1 I0820 23:02:30.588403 6 log.go:172] (0xc0017b8780) (1) Data frame handling I0820 23:02:30.588415 6 log.go:172] (0xc0017b8780) (1) Data frame sent I0820 23:02:30.588531 6 log.go:172] (0xc002f1e630) (0xc0017b8780) Stream removed, broadcasting: 1 I0820 23:02:30.588592 6 log.go:172] (0xc002f1e630) (0xc0017b8780) Stream removed, broadcasting: 1 I0820 23:02:30.588605 6 log.go:172] (0xc002f1e630) (0xc0024120a0) Stream removed, broadcasting: 3 I0820 23:02:30.588615 6 log.go:172] (0xc002f1e630) (0xc0017b8a00) Stream removed, broadcasting: 5 Aug 20 23:02:30.588: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:02:30.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0820 23:02:30.588713 6 log.go:172] (0xc002f1e630) Go away received STEP: Destroying namespace "pod-network-test-4657" for this suite. 
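[editor's note] The intra-pod check above curls agnhost's /dial endpoint from inside a host test pod via the pods/exec subresource; the noisy log.go stream frames are the SPDY streams carrying stdin/stdout/stderr. A hedged sketch of that mechanism with client-go's remotecommand package (container name and shell invocation mirror the log; exec.Stream is the classic pre-context API):

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a shell command in a pod via the exec subresource,
// the same mechanism behind the test's ExecWithOptions calls.
func execInPod(cfg *rest.Config, cs kubernetes.Interface, ns, pod, cmd string) (string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost",
			Command:   []string{"/bin/sh", "-c", cmd},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	// stdout and stderr each ride their own SPDY stream -- the
	// "broadcasting: 1/3/5" frames seen in the log above.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		return "", fmt.Errorf("exec failed: %v (stderr: %s)", err, stderr.String())
	}
	return stdout.String(), nil
}

Invoked with the curl .../dial?request=hostname&... command from the log, stdout carries the JSON map of hostnames that answered; "Waiting for responses: map[]" means every expected endpoint replied.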
• [SLOW TEST:24.530 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":817,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:02:30.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:02:47.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2912" for this suite. • [SLOW TEST:17.114 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":46,"skipped":823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:02:47.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f3496558-87ac-44f3-82c9-f65ed789e59f STEP: Creating a pod to test consume configMaps Aug 20 23:02:47.971: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5" in namespace "configmap-3652" to be "success or failure" Aug 20 23:02:47.974: INFO: Pod "pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435219ms Aug 20 23:02:49.978: INFO: Pod "pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007091586s Aug 20 23:02:51.982: INFO: Pod "pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010945457s STEP: Saw pod success Aug 20 23:02:51.982: INFO: Pod "pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5" satisfied condition "success or failure" Aug 20 23:02:51.996: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5 container configmap-volume-test: STEP: delete the pod Aug 20 23:02:52.012: INFO: Waiting for pod pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5 to disappear Aug 20 23:02:52.063: INFO: Pod pod-configmaps-1e98c0d3-8307-450d-8e16-bca917216fc5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:02:52.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3652" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":904,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:02:52.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:02:52.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9917" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":48,"skipped":908,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:02:52.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:07.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1370" for this suite. STEP: Destroying namespace "nsdeletetest-5913" for this suite. Aug 20 23:03:07.482: INFO: Namespace nsdeletetest-5913 was already deleted STEP: Destroying namespace "nsdeletetest-3312" for this suite. • [SLOW TEST:15.290 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":49,"skipped":922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:07.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:03:07.586: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 20 23:03:07.593: INFO: Number of nodes with available pods: 0 Aug 20 23:03:07.593: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 20 23:03:07.703: INFO: Number of nodes with available pods: 0 Aug 20 23:03:07.703: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:08.707: INFO: Number of nodes with available pods: 0 Aug 20 23:03:08.707: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:09.718: INFO: Number of nodes with available pods: 0 Aug 20 23:03:09.718: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:10.713: INFO: Number of nodes with available pods: 1 Aug 20 23:03:10.713: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 20 23:03:10.835: INFO: Number of nodes with available pods: 1 Aug 20 23:03:10.835: INFO: Number of running nodes: 0, number of available pods: 1 Aug 20 23:03:11.839: INFO: Number of nodes with available pods: 0 Aug 20 23:03:11.839: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 20 23:03:11.852: INFO: Number of nodes with available pods: 0 Aug 20 23:03:11.852: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:12.855: INFO: Number of nodes with available pods: 0 Aug 20 23:03:12.855: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:13.856: INFO: Number of nodes with available pods: 0 Aug 20 23:03:13.856: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:14.855: INFO: Number of nodes with available pods: 0 Aug 20 23:03:14.855: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:15.855: INFO: Number of nodes with available pods: 0 Aug 20 23:03:15.855: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:16.855: INFO: Number of nodes with available pods: 0 Aug 20 23:03:16.855: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:17.856: INFO: Number of nodes with available pods: 0 Aug 20 23:03:17.856: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:03:18.856: INFO: Number of nodes with available pods: 1 Aug 20 23:03:18.856: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2871, will wait for the garbage collector to delete the pods Aug 20 23:03:18.920: INFO: Deleting DaemonSet.extensions daemon-set took: 6.443756ms Aug 20 23:03:19.221: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.244898ms Aug 20 23:03:31.646: INFO: Number of nodes with available pods: 0 Aug 20 23:03:31.646: INFO: Number of running nodes: 0, number of available pods: 0 Aug 20 23:03:31.649: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2871/daemonsets","resourceVersion":"1945935"},"items":null} Aug 20 23:03:31.652: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2871/pods","resourceVersion":"1945935"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:31.681: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2871" for this suite. • [SLOW TEST:24.201 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":50,"skipped":958,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:31.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Aug 20 23:03:31.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5053 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 20 23:03:34.886: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0820 23:03:34.796532 87 log.go:172] (0xc00010fce0) (0xc00060e280) Create stream\nI0820 23:03:34.796595 87 log.go:172] (0xc00010fce0) (0xc00060e280) Stream added, broadcasting: 1\nI0820 23:03:34.799423 87 log.go:172] (0xc00010fce0) Reply frame received for 1\nI0820 23:03:34.799475 87 log.go:172] (0xc00010fce0) (0xc0003d80a0) Create stream\nI0820 23:03:34.799491 87 log.go:172] (0xc00010fce0) (0xc0003d80a0) Stream added, broadcasting: 3\nI0820 23:03:34.800687 87 log.go:172] (0xc00010fce0) Reply frame received for 3\nI0820 23:03:34.800857 87 log.go:172] (0xc00010fce0) (0xc0003d8140) Create stream\nI0820 23:03:34.800882 87 log.go:172] (0xc00010fce0) (0xc0003d8140) Stream added, broadcasting: 5\nI0820 23:03:34.801991 87 log.go:172] (0xc00010fce0) Reply frame received for 5\nI0820 23:03:34.802014 87 log.go:172] (0xc00010fce0) (0xc0003d81e0) Create stream\nI0820 23:03:34.802022 87 log.go:172] (0xc00010fce0) (0xc0003d81e0) Stream added, broadcasting: 7\nI0820 23:03:34.803056 87 log.go:172] (0xc00010fce0) Reply frame received for 7\nI0820 23:03:34.803206 87 log.go:172] (0xc0003d80a0) (3) Writing data frame\nI0820 23:03:34.803387 87 log.go:172] (0xc0003d80a0) (3) Writing data frame\nI0820 23:03:34.804290 87 log.go:172] (0xc00010fce0) Data frame received for 5\nI0820 23:03:34.804327 87 log.go:172] (0xc0003d8140) (5) Data frame handling\nI0820 23:03:34.804364 87 log.go:172] (0xc0003d8140) (5) Data frame sent\nI0820 23:03:34.805319 87 log.go:172] (0xc00010fce0) Data frame received for 5\nI0820 23:03:34.805342 87 log.go:172] (0xc0003d8140) (5) Data frame handling\nI0820 23:03:34.805361 87 log.go:172] (0xc0003d8140) (5) Data frame sent\nI0820 23:03:34.855780 87 log.go:172] (0xc00010fce0) Data frame received for 5\nI0820 23:03:34.855817 87 log.go:172] (0xc0003d8140) (5) Data frame handling\nI0820 23:03:34.856101 87 log.go:172] (0xc00010fce0) Data frame received for 7\nI0820 23:03:34.856135 87 log.go:172] (0xc0003d81e0) (7) Data frame handling\nI0820 23:03:34.856293 87 log.go:172] (0xc00010fce0) Data frame received for 1\nI0820 23:03:34.856346 87 log.go:172] (0xc00060e280) (1) Data frame handling\nI0820 23:03:34.856378 87 log.go:172] (0xc00060e280) (1) Data frame sent\nI0820 23:03:34.856405 87 log.go:172] (0xc00010fce0) (0xc00060e280) Stream removed, broadcasting: 1\nI0820 23:03:34.856464 87 log.go:172] (0xc00010fce0) (0xc0003d80a0) Stream removed, broadcasting: 3\nI0820 23:03:34.856664 87 log.go:172] (0xc00010fce0) Go away received\nI0820 23:03:34.857035 87 log.go:172] (0xc00010fce0) (0xc00060e280) Stream removed, broadcasting: 1\nI0820 23:03:34.857065 87 log.go:172] (0xc00010fce0) (0xc0003d80a0) Stream removed, broadcasting: 3\nI0820 23:03:34.857078 87 log.go:172] (0xc00010fce0) (0xc0003d8140) Stream removed, broadcasting: 5\nI0820 23:03:34.857089 87 log.go:172] (0xc00010fce0) (0xc0003d81e0) Stream removed, broadcasting: 7\n" Aug 20 23:03:34.886: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:36.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5053" for this suite. 
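The command exercised above uses the deprecated job generator; here is a sketch of that form next to a rough modern replacement (illustrative job name; the replacement does not attach stdin the way --attach/--stdin do):

  # deprecated form, as run by the suite:
  kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
    --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
    -- sh -c 'cat && echo "stdin closed"'
  # rough modern equivalent: create the Job explicitly, then clean it up
  kubectl create job e2e-demo-job --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
  kubectl wait --for=condition=complete job/e2e-demo-job
  kubectl delete job e2e-demo-job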
• [SLOW TEST:5.232 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843 should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":51,"skipped":977,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:36.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 20 23:03:36.993: INFO: Waiting up to 5m0s for pod "downward-api-32e60c69-a187-424c-b38a-f27094156a36" in namespace "downward-api-1865" to be "success or failure" Aug 20 23:03:37.099: INFO: Pod "downward-api-32e60c69-a187-424c-b38a-f27094156a36": Phase="Pending", Reason="", readiness=false. Elapsed: 106.050415ms Aug 20 23:03:39.103: INFO: Pod "downward-api-32e60c69-a187-424c-b38a-f27094156a36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110235438s Aug 20 23:03:41.107: INFO: Pod "downward-api-32e60c69-a187-424c-b38a-f27094156a36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114499329s STEP: Saw pod success Aug 20 23:03:41.107: INFO: Pod "downward-api-32e60c69-a187-424c-b38a-f27094156a36" satisfied condition "success or failure" Aug 20 23:03:41.111: INFO: Trying to get logs from node jerma-worker2 pod downward-api-32e60c69-a187-424c-b38a-f27094156a36 container dapi-container: STEP: delete the pod Aug 20 23:03:41.170: INFO: Waiting for pod downward-api-32e60c69-a187-424c-b38a-f27094156a36 to disappear Aug 20 23:03:41.182: INFO: Pod downward-api-32e60c69-a187-424c-b38a-f27094156a36 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:41.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1865" for this suite. 
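A minimal sketch of the kind of pod this Downward API spec creates, assuming illustrative names and busybox in place of the test image; the downward API projects pod metadata into the container as environment variables:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-demo          # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "env | grep ^POD_"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
  EOF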
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":986,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:41.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:03:41.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6176' Aug 20 23:03:41.578: INFO: stderr: "" Aug 20 23:03:41.578: INFO: stdout: "replicationcontroller/agnhost-master created\n" Aug 20 23:03:41.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6176' Aug 20 23:03:41.840: INFO: stderr: "" Aug 20 23:03:41.840: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 20 23:03:42.844: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:03:42.845: INFO: Found 0 / 1 Aug 20 23:03:43.844: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:03:43.844: INFO: Found 0 / 1 Aug 20 23:03:44.844: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:03:44.844: INFO: Found 1 / 1 Aug 20 23:03:44.844: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 20 23:03:44.859: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:03:44.860: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 20 23:03:44.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-458j2 --namespace=kubectl-6176' Aug 20 23:03:44.977: INFO: stderr: "" Aug 20 23:03:44.977: INFO: stdout: "Name: agnhost-master-458j2\nNamespace: kubectl-6176\nPriority: 0\nNode: jerma-worker/172.18.0.6\nStart Time: Thu, 20 Aug 2020 23:03:41 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.224\nIPs:\n IP: 10.244.2.224\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://7dce5acc1aa4035c2e87c09a5e2296de32defc4270e785c11695c1050e0c1fda\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 20 Aug 2020 23:03:43 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-2n4vz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-2n4vz:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-2n4vz\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-6176/agnhost-master-458j2 to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 0s kubelet, jerma-worker Started container agnhost-master\n" Aug 20 23:03:44.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6176' Aug 20 23:03:45.106: INFO: stderr: "" Aug 20 23:03:45.106: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6176\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-458j2\n" Aug 20 23:03:45.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6176' Aug 20 23:03:45.217: INFO: stderr: "" Aug 20 23:03:45.217: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6176\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.101.230.125\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.224:6379\nSession Affinity: None\nEvents: \n" Aug 20 23:03:45.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Aug 20 23:03:45.345: INFO: stderr: "" Aug 20 23:03:45.345: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:37:06 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Thu, 20 Aug 2020 23:03:43 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 20 Aug 2020 23:02:05 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 20 Aug 2020 23:02:05 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 20 Aug 2020 23:02:05 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 20 Aug 2020 23:02:05 +0000 Sat, 15 Aug 2020 09:37:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.10\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: e52c45bc589d48d995e8fd79ad5bf250\n System UUID: b981bdc7-d264-48ef-ab5e-3308e23aaf13\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.17.5\n Kube-Proxy Version: v1.17.5\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-bvrm4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d13h\n kube-system coredns-6955765f44-db8rh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d13h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d13h\n kube-system kindnet-j88mt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d13h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d13h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d13h\n kube-system kube-proxy-hmb6l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d13h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d13h\n local-path-storage local-path-provisioner-58f6947c7-p2cqw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d13h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Aug 20 23:03:45.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6176' Aug 20 23:03:45.453: INFO: stderr: "" Aug 20 23:03:45.453: INFO: stdout: "Name: kubectl-6176\nLabels: e2e-framework=kubectl\n e2e-run=1e3aa255-819f-4cf1-9e7c-20a3bc22599f\nAnnotations: \nStatus: Active\n\nNo resource 
quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:45.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6176" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":53,"skipped":1018,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:45.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:03:45.500: INFO: Creating deployment "test-recreate-deployment" Aug 20 23:03:45.510: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Aug 20 23:03:45.575: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 20 23:03:47.662: INFO: Waiting for deployment "test-recreate-deployment" to complete Aug 20 23:03:47.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561425, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561425, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561425, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561425, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 20 23:03:49.668: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 20 23:03:49.675: INFO: Updating deployment test-recreate-deployment Aug 20 23:03:49.675: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run alongside old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 20 23:03:49.900: INFO: Deployment
"test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7317 /apis/apps/v1/namespaces/deployment-7317/deployments/test-recreate-deployment a0a21707-af7a-480e-92d9-efc42beb4d21 1946130 2 2020-08-20 23:03:45 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000efaac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-20 23:03:49 +0000 UTC,LastTransitionTime:2020-08-20 23:03:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-20 23:03:49 +0000 UTC,LastTransitionTime:2020-08-20 23:03:45 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 20 23:03:49.986: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7317 /apis/apps/v1/namespaces/deployment-7317/replicasets/test-recreate-deployment-5f94c574ff 16b1b401-4fd1-41d9-9074-20255a56a113 1946128 1 2020-08-20 23:03:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment a0a21707-af7a-480e-92d9-efc42beb4d21 0xc000469687 0xc000469688}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0004697c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 20 23:03:49.986: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 20 23:03:49.987: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-7317 /apis/apps/v1/namespaces/deployment-7317/replicasets/test-recreate-deployment-799c574856 77bb3f9a-fa15-488d-a786-2e651f72cad6 1946119 2 2020-08-20 23:03:45 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment a0a21707-af7a-480e-92d9-efc42beb4d21 0xc000469947 0xc000469948}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000469ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 20 23:03:50.085: INFO: Pod "test-recreate-deployment-5f94c574ff-n2m7j" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-n2m7j test-recreate-deployment-5f94c574ff- deployment-7317 /api/v1/namespaces/deployment-7317/pods/test-recreate-deployment-5f94c574ff-n2m7j d996bc6f-d0f5-46d4-9b69-7bbd41a47e35 1946131 0 2020-08-20 23:03:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 16b1b401-4fd1-41d9-9074-20255a56a113 0xc000582d07 0xc000582d08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wvrtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wvrtx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wvrtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:03:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:03:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:03:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:03:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:03:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:50.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7317" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":54,"skipped":1030,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:50.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9970.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9970.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9970.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9970.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9970.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9970.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 20 23:03:58.410: INFO: DNS probes using dns-9970/dns-test-c4adde05-460a-48e8-b2f0-ac70c993491f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:58.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9970" for this suite. • [SLOW TEST:8.481 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":55,"skipped":1033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:58.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:03:59.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7776" for this suite. 
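The Kubelet spec above only has to prove that a pod whose container always exits non-zero can still be deleted; a minimal sketch under illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo         # illustrative name
  spec:
    restartPolicy: Always        # kubelet keeps restarting the failing container
    containers:
    - name: bin-false
      image: busybox:1.29
      command: ["/bin/false"]    # always exits non-zero
  EOF
  kubectl delete pod bin-false-demo   # deletion must succeed despite the crash loop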
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1059,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:03:59.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276 STEP: creating the pod Aug 20 23:03:59.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-669' Aug 20 23:03:59.576: INFO: stderr: "" Aug 20 23:03:59.576: INFO: stdout: "pod/pause created\n" Aug 20 23:03:59.576: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 20 23:03:59.576: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-669" to be "running and ready" Aug 20 23:03:59.583: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307056ms Aug 20 23:04:01.605: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02852673s Aug 20 23:04:03.609: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032056617s Aug 20 23:04:05.613: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.036260533s Aug 20 23:04:05.613: INFO: Pod "pause" satisfied condition "running and ready" Aug 20 23:04:05.613: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Aug 20 23:04:05.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-669' Aug 20 23:04:05.716: INFO: stderr: "" Aug 20 23:04:05.716: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 20 23:04:05.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-669' Aug 20 23:04:05.812: INFO: stderr: "" Aug 20 23:04:05.812: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 20 23:04:05.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-669' Aug 20 23:04:05.912: INFO: stderr: "" Aug 20 23:04:05.912: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 20 23:04:05.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-669' Aug 20 23:04:06.055: INFO: stderr: "" Aug 20 23:04:06.055: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283 STEP: using delete to clean up resources Aug 20 23:04:06.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-669' Aug 20 23:04:06.204: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:04:06.204: INFO: stdout: "pod \"pause\" force deleted\n" Aug 20 23:04:06.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-669' Aug 20 23:04:06.300: INFO: stderr: "No resources found in kubectl-669 namespace.\n" Aug 20 23:04:06.300: INFO: stdout: "" Aug 20 23:04:06.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-669 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 20 23:04:06.393: INFO: stderr: "" Aug 20 23:04:06.393: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:06.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-669" for this suite. 
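The label workflow above distills to three kubectl calls (the same commands the suite ran, minus its --kubeconfig and --namespace flags):

  kubectl label pods pause testing-label=testing-label-value   # add the label
  kubectl get pod pause -L testing-label                       # -L shows it as a column
  kubectl label pods pause testing-label-                      # trailing '-' removes it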
• [SLOW TEST:7.279 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273 should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":57,"skipped":1068,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:06.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Aug 20 23:04:06.693: INFO: Waiting up to 5m0s for pod "client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a" in namespace "containers-7526" to be "success or failure" Aug 20 23:04:06.725: INFO: Pod "client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.504121ms Aug 20 23:04:08.729: INFO: Pod "client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035602216s Aug 20 23:04:10.733: INFO: Pod "client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039293234s STEP: Saw pod success Aug 20 23:04:10.733: INFO: Pod "client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a" satisfied condition "success or failure" Aug 20 23:04:10.735: INFO: Trying to get logs from node jerma-worker pod client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a container test-container: STEP: delete the pod Aug 20 23:04:10.798: INFO: Waiting for pod client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a to disappear Aug 20 23:04:10.807: INFO: Pod client-containers-7514976e-5358-4909-bf05-bc1b1a871a8a no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:10.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7526" for this suite. 
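In pod terms, the override tested above is command (which replaces the image ENTRYPOINT) plus args (which replaces the image CMD); a sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-demo          # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["/bin/echo"]     # overrides ENTRYPOINT
      args: ["hello", "world"]   # overrides CMD
  EOF
  kubectl logs override-demo     # expected output once the pod has run: hello world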
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1083,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:10.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0497048b-b19c-4baa-8392-3cf3ddaab356 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0497048b-b19c-4baa-8392-3cf3ddaab356 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:16.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7017" for this suite. • [SLOW TEST:6.184 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:17.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Aug 20 23:04:17.072: INFO: apiVersion: 
v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Aug 20 23:04:17.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2035' Aug 20 23:04:17.436: INFO: stderr: "" Aug 20 23:04:17.436: INFO: stdout: "service/agnhost-slave created\n" Aug 20 23:04:17.436: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Aug 20 23:04:17.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2035' Aug 20 23:04:17.712: INFO: stderr: "" Aug 20 23:04:17.712: INFO: stdout: "service/agnhost-master created\n" Aug 20 23:04:17.713: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Aug 20 23:04:17.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2035' Aug 20 23:04:17.999: INFO: stderr: "" Aug 20 23:04:17.999: INFO: stdout: "service/frontend created\n" Aug 20 23:04:18.000: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Aug 20 23:04:18.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2035' Aug 20 23:04:18.243: INFO: stderr: "" Aug 20 23:04:18.243: INFO: stdout: "deployment.apps/frontend created\n" Aug 20 23:04:18.244: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 20 23:04:18.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2035' Aug 20 23:04:18.506: INFO: stderr: "" Aug 20 23:04:18.506: INFO: stdout: "deployment.apps/agnhost-master created\n" Aug 20 23:04:18.506: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 20 23:04:18.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2035' Aug 20 23:04:18.781: INFO: stderr: "" Aug 20 23:04:18.781: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating 
guestbook app Aug 20 23:04:18.781: INFO: Waiting for all frontend pods to be Running. Aug 20 23:04:28.832: INFO: Waiting for frontend to serve content. Aug 20 23:04:28.842: INFO: Trying to add a new entry to the guestbook. Aug 20 23:04:28.854: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 20 23:04:28.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2035' Aug 20 23:04:29.007: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:04:29.007: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Aug 20 23:04:29.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2035' Aug 20 23:04:29.158: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:04:29.158: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 20 23:04:29.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2035' Aug 20 23:04:29.349: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:04:29.349: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 20 23:04:29.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2035' Aug 20 23:04:29.445: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:04:29.445: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 20 23:04:29.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2035' Aug 20 23:04:29.543: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:04:29.543: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 20 23:04:29.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2035' Aug 20 23:04:29.663: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:04:29.663: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:29.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2035" for this suite. 
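For reference, the "Waiting for frontend to serve content" check above amounts to an HTTP GET against the frontend Service on port 80 from inside the cluster. A minimal hand-run equivalent might look like the sketch below; the pod name and curl image are hypothetical and not used by the suite:
apiVersion: v1
kind: Pod
metadata:
  name: guestbook-check   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl   # assumed utility image, not part of this run
    args: ["-s", "http://frontend:80/"]   # resolves the frontend Service by its cluster DNS name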
• [SLOW TEST:12.670 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381 should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":60,"skipped":1157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:29.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:30.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6768" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":61,"skipped":1180,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:30.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 20 23:04:30.959: INFO: Waiting up to 5m0s for pod "pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8" in namespace "emptydir-9718" to be "success or failure" Aug 20 23:04:31.041: INFO: Pod "pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 81.32188ms Aug 20 23:04:33.054: INFO: Pod "pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094603682s Aug 20 23:04:35.096: INFO: Pod "pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1360697s Aug 20 23:04:37.099: INFO: Pod "pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139662143s STEP: Saw pod success Aug 20 23:04:37.099: INFO: Pod "pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8" satisfied condition "success or failure" Aug 20 23:04:37.102: INFO: Trying to get logs from node jerma-worker pod pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8 container test-container: STEP: delete the pod Aug 20 23:04:37.233: INFO: Waiting for pod pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8 to disappear Aug 20 23:04:37.245: INFO: Pod pod-014ebd61-ed10-48b2-bae6-7100d89b1ac8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:37.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9718" for this suite. • [SLOW TEST:6.727 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1197,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:37.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 20 23:04:37.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1606' Aug 20 
23:04:37.432: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 20 23:04:37.432: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Aug 20 23:04:37.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1606' Aug 20 23:04:37.540: INFO: stderr: "" Aug 20 23:04:37.540: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:37.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1606" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":63,"skipped":1201,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:37.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b0c628a8-89a1-4356-b2ff-b23dfe91ef0f STEP: Creating secret with name s-test-opt-upd-d8c22484-8655-4dfb-9214-1a8ac9ee713c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b0c628a8-89a1-4356-b2ff-b23dfe91ef0f STEP: Updating secret s-test-opt-upd-d8c22484-8655-4dfb-9214-1a8ac9ee713c STEP: Creating secret with name s-test-opt-create-857f1242-a520-4980-b3f6-a8fc2f84b3ae STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:45.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4291" for this suite. 
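The "optional" in the Secrets test above refers to the optional flag on a secret volume source: a pod mounting such a volume starts even when the named Secret does not exist yet, and the kubelet projects the keys in once it appears. A minimal sketch, with hypothetical names and an assumed utility image:
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: docker.io/library/busybox:1.29   # assumed image, not part of this run
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    secret:
      secretName: maybe-created-later   # hypothetical Secret name
      optional: true   # pod starts even if the Secret is absent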
• [SLOW TEST:8.277 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1209,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:45.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create services for rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Aug 20 23:04:45.907: INFO: namespace kubectl-235 Aug 20 23:04:45.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-235' Aug 20 23:04:46.144: INFO: stderr: "" Aug 20 23:04:46.144: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 20 23:04:47.148: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:04:47.148: INFO: Found 0 / 1 Aug 20 23:04:48.149: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:04:48.149: INFO: Found 0 / 1 Aug 20 23:04:49.149: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:04:49.149: INFO: Found 1 / 1 Aug 20 23:04:49.149: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 20 23:04:49.152: INFO: Selector matched 1 pods for map[app:agnhost] Aug 20 23:04:49.152: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 20 23:04:49.152: INFO: wait on agnhost-master startup in kubectl-235 Aug 20 23:04:49.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-66t74 agnhost-master --namespace=kubectl-235' Aug 20 23:04:49.264: INFO: stderr: "" Aug 20 23:04:49.264: INFO: stdout: "Paused\n" STEP: exposing RC Aug 20 23:04:49.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-235' Aug 20 23:04:49.442: INFO: stderr: "" Aug 20 23:04:49.442: INFO: stdout: "service/rm2 exposed\n" Aug 20 23:04:49.446: INFO: Service rm2 in namespace kubectl-235 found. 
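The rm2 object just created is an ordinary Service generated by kubectl expose. Judging from the expose flags and the app=agnhost selector matched earlier, it is roughly equivalent to applying the following (a sketch inferred from the log, not output from this run):
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  ports:
  - port: 1234        # --port
    targetPort: 6379  # --target-port
  selector:
    app: agnhost      # inherited from the replication controller's selector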
STEP: exposing service Aug 20 23:04:51.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-235' Aug 20 23:04:51.615: INFO: stderr: "" Aug 20 23:04:51.615: INFO: stdout: "service/rm3 exposed\n" Aug 20 23:04:51.720: INFO: Service rm3 in namespace kubectl-235 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:04:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-235" for this suite. • [SLOW TEST:7.870 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189 should create services for rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":65,"skipped":1213,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:04:53.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0820 23:05:24.370594 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 20 23:05:24.370: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:05:24.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7775" for this suite. • [SLOW TEST:30.648 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":66,"skipped":1220,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:05:24.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-6986 STEP: creating replication controller nodeport-test in namespace services-6986 I0820 23:05:24.509506 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6986, replica count: 2 I0820 23:05:27.559915 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0820 23:05:30.560121 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 20 23:05:30.560: INFO: Creating new exec pod Aug 20 23:05:35.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6986 execpodg8bz9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 20 23:05:36.037: INFO: stderr: "I0820 23:05:35.743129 799 log.go:172] (0xc000afef20) (0xc000ade780) Create stream\nI0820 23:05:35.743176 799 log.go:172] (0xc000afef20) (0xc000ade780) Stream added, broadcasting: 1\nI0820 23:05:35.748136 799 log.go:172] (0xc000afef20) Reply frame received for 1\nI0820 23:05:35.748197 799 log.go:172] (0xc000afef20) (0xc000609c20) Create stream\nI0820 23:05:35.748217 799 log.go:172] (0xc000afef20) (0xc000609c20) Stream added, broadcasting: 3\nI0820 23:05:35.749621 799 log.go:172] (0xc000afef20) Reply frame received for 3\nI0820 23:05:35.749672 799 log.go:172] (0xc000afef20) (0xc000609cc0) Create stream\nI0820 23:05:35.749697 799 log.go:172] (0xc000afef20) (0xc000609cc0) Stream added, broadcasting: 5\nI0820 23:05:35.750637 799 log.go:172] (0xc000afef20) Reply frame received for 5\nI0820 23:05:36.026111 799 log.go:172] (0xc000afef20) Data frame received for 5\nI0820 23:05:36.026140 799 log.go:172] (0xc000609cc0) (5) Data frame handling\nI0820 23:05:36.026243 799 log.go:172] (0xc000609cc0) (5) Data frame sent\nI0820 23:05:36.026259 799 log.go:172] (0xc000afef20) Data frame received for 5\nI0820 23:05:36.026270 799 log.go:172] (0xc000609cc0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0820 23:05:36.026290 799 log.go:172] (0xc000609cc0) (5) Data frame sent\nI0820 23:05:36.026614 799 log.go:172] (0xc000afef20) Data frame received for 3\nI0820 23:05:36.026645 799 log.go:172] (0xc000609c20) (3) Data frame handling\nI0820 23:05:36.027280 799 log.go:172] (0xc000afef20) Data frame received for 5\nI0820 23:05:36.027297 799 log.go:172] (0xc000609cc0) (5) Data frame handling\nI0820 23:05:36.028924 799 log.go:172] (0xc000afef20) Data frame received for 1\nI0820 23:05:36.028941 799 log.go:172] (0xc000ade780) (1) Data frame handling\nI0820 23:05:36.028947 799 log.go:172] (0xc000ade780) (1) Data frame sent\nI0820 23:05:36.028957 799 log.go:172] (0xc000afef20) (0xc000ade780) Stream removed, broadcasting: 1\nI0820 23:05:36.028970 799 log.go:172] (0xc000afef20) Go away received\nI0820 23:05:36.029418 799 log.go:172] (0xc000afef20) (0xc000ade780) Stream removed, broadcasting: 1\nI0820 23:05:36.029446 799 log.go:172] (0xc000afef20) (0xc000609c20) Stream removed, broadcasting: 3\nI0820 23:05:36.029464 799 log.go:172] (0xc000afef20) (0xc000609cc0) Stream removed, broadcasting: 5\n" Aug 20 23:05:36.037: INFO: stdout: "" Aug 20 23:05:36.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6986 execpodg8bz9 -- /bin/sh -x -c nc -zv -t -w 2 10.108.207.221 80' Aug 20 23:05:36.228: INFO: stderr: "I0820 23:05:36.160821 821 log.go:172] (0xc0009ba630) (0xc0006d1ae0) Create stream\nI0820 23:05:36.160876 821 log.go:172] (0xc0009ba630) (0xc0006d1ae0) Stream added, broadcasting: 1\nI0820 23:05:36.165888 821 log.go:172] (0xc0009ba630) Reply frame received for 1\nI0820 23:05:36.165944 821 log.go:172] (0xc0009ba630) (0xc00060c500) Create stream\nI0820 23:05:36.165979 821 log.go:172] (0xc0009ba630) (0xc00060c500) Stream added, broadcasting: 3\nI0820 23:05:36.168720 821 
log.go:172] (0xc0009ba630) Reply frame received for 3\nI0820 23:05:36.168943 821 log.go:172] (0xc0009ba630) (0xc000a70000) Create stream\nI0820 23:05:36.168999 821 log.go:172] (0xc0009ba630) (0xc000a70000) Stream added, broadcasting: 5\nI0820 23:05:36.171239 821 log.go:172] (0xc0009ba630) Reply frame received for 5\nI0820 23:05:36.218766 821 log.go:172] (0xc0009ba630) Data frame received for 5\nI0820 23:05:36.218818 821 log.go:172] (0xc0009ba630) Data frame received for 3\nI0820 23:05:36.218864 821 log.go:172] (0xc00060c500) (3) Data frame handling\nI0820 23:05:36.218907 821 log.go:172] (0xc000a70000) (5) Data frame handling\nI0820 23:05:36.218923 821 log.go:172] (0xc000a70000) (5) Data frame sent\nI0820 23:05:36.218932 821 log.go:172] (0xc0009ba630) Data frame received for 5\nI0820 23:05:36.218939 821 log.go:172] (0xc000a70000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.207.221 80\nConnection to 10.108.207.221 80 port [tcp/http] succeeded!\nI0820 23:05:36.220358 821 log.go:172] (0xc0009ba630) Data frame received for 1\nI0820 23:05:36.220379 821 log.go:172] (0xc0006d1ae0) (1) Data frame handling\nI0820 23:05:36.220400 821 log.go:172] (0xc0006d1ae0) (1) Data frame sent\nI0820 23:05:36.220414 821 log.go:172] (0xc0009ba630) (0xc0006d1ae0) Stream removed, broadcasting: 1\nI0820 23:05:36.220472 821 log.go:172] (0xc0009ba630) Go away received\nI0820 23:05:36.220704 821 log.go:172] (0xc0009ba630) (0xc0006d1ae0) Stream removed, broadcasting: 1\nI0820 23:05:36.220717 821 log.go:172] (0xc0009ba630) (0xc00060c500) Stream removed, broadcasting: 3\nI0820 23:05:36.220819 821 log.go:172] (0xc0009ba630) (0xc000a70000) Stream removed, broadcasting: 5\n" Aug 20 23:05:36.228: INFO: stdout: "" Aug 20 23:05:36.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6986 execpodg8bz9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31275' Aug 20 23:05:36.415: INFO: stderr: "I0820 23:05:36.351841 842 log.go:172] (0xc0000f6e70) (0xc000976000) Create stream\nI0820 23:05:36.351898 842 log.go:172] (0xc0000f6e70) (0xc000976000) Stream added, broadcasting: 1\nI0820 23:05:36.354251 842 log.go:172] (0xc0000f6e70) Reply frame received for 1\nI0820 23:05:36.354281 842 log.go:172] (0xc0000f6e70) (0xc000681ae0) Create stream\nI0820 23:05:36.354290 842 log.go:172] (0xc0000f6e70) (0xc000681ae0) Stream added, broadcasting: 3\nI0820 23:05:36.355030 842 log.go:172] (0xc0000f6e70) Reply frame received for 3\nI0820 23:05:36.355079 842 log.go:172] (0xc0000f6e70) (0xc000681cc0) Create stream\nI0820 23:05:36.355101 842 log.go:172] (0xc0000f6e70) (0xc000681cc0) Stream added, broadcasting: 5\nI0820 23:05:36.355794 842 log.go:172] (0xc0000f6e70) Reply frame received for 5\nI0820 23:05:36.403068 842 log.go:172] (0xc0000f6e70) Data frame received for 3\nI0820 23:05:36.403092 842 log.go:172] (0xc000681ae0) (3) Data frame handling\nI0820 23:05:36.403108 842 log.go:172] (0xc0000f6e70) Data frame received for 5\nI0820 23:05:36.403114 842 log.go:172] (0xc000681cc0) (5) Data frame handling\nI0820 23:05:36.403121 842 log.go:172] (0xc000681cc0) (5) Data frame sent\nI0820 23:05:36.403127 842 log.go:172] (0xc0000f6e70) Data frame received for 5\nI0820 23:05:36.403132 842 log.go:172] (0xc000681cc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 31275\nConnection to 172.18.0.6 31275 port [tcp/31275] succeeded!\nI0820 23:05:36.404625 842 log.go:172] (0xc0000f6e70) Data frame received for 1\nI0820 23:05:36.404646 842 log.go:172] (0xc000976000) (1) Data frame handling\nI0820 23:05:36.404660 842 
log.go:172] (0xc000976000) (1) Data frame sent\nI0820 23:05:36.404685 842 log.go:172] (0xc0000f6e70) (0xc000976000) Stream removed, broadcasting: 1\nI0820 23:05:36.405002 842 log.go:172] (0xc0000f6e70) Go away received\nI0820 23:05:36.405140 842 log.go:172] (0xc0000f6e70) (0xc000976000) Stream removed, broadcasting: 1\nI0820 23:05:36.405163 842 log.go:172] (0xc0000f6e70) (0xc000681ae0) Stream removed, broadcasting: 3\nI0820 23:05:36.405176 842 log.go:172] (0xc0000f6e70) (0xc000681cc0) Stream removed, broadcasting: 5\n" Aug 20 23:05:36.415: INFO: stdout: "" Aug 20 23:05:36.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6986 execpodg8bz9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31275' Aug 20 23:05:36.630: INFO: stderr: "I0820 23:05:36.546047 863 log.go:172] (0xc0003c0d10) (0xc0006c7ae0) Create stream\nI0820 23:05:36.546106 863 log.go:172] (0xc0003c0d10) (0xc0006c7ae0) Stream added, broadcasting: 1\nI0820 23:05:36.548577 863 log.go:172] (0xc0003c0d10) Reply frame received for 1\nI0820 23:05:36.548610 863 log.go:172] (0xc0003c0d10) (0xc000902000) Create stream\nI0820 23:05:36.548624 863 log.go:172] (0xc0003c0d10) (0xc000902000) Stream added, broadcasting: 3\nI0820 23:05:36.549543 863 log.go:172] (0xc0003c0d10) Reply frame received for 3\nI0820 23:05:36.549567 863 log.go:172] (0xc0003c0d10) (0xc0006c7cc0) Create stream\nI0820 23:05:36.549575 863 log.go:172] (0xc0003c0d10) (0xc0006c7cc0) Stream added, broadcasting: 5\nI0820 23:05:36.550424 863 log.go:172] (0xc0003c0d10) Reply frame received for 5\nI0820 23:05:36.619164 863 log.go:172] (0xc0003c0d10) Data frame received for 3\nI0820 23:05:36.619207 863 log.go:172] (0xc000902000) (3) Data frame handling\nI0820 23:05:36.619257 863 log.go:172] (0xc0003c0d10) Data frame received for 5\nI0820 23:05:36.619276 863 log.go:172] (0xc0006c7cc0) (5) Data frame handling\nI0820 23:05:36.619297 863 log.go:172] (0xc0006c7cc0) (5) Data frame sent\nI0820 23:05:36.619315 863 log.go:172] (0xc0003c0d10) Data frame received for 5\nI0820 23:05:36.619333 863 log.go:172] (0xc0006c7cc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 31275\nConnection to 172.18.0.3 31275 port [tcp/31275] succeeded!\nI0820 23:05:36.621212 863 log.go:172] (0xc0003c0d10) Data frame received for 1\nI0820 23:05:36.621229 863 log.go:172] (0xc0006c7ae0) (1) Data frame handling\nI0820 23:05:36.621236 863 log.go:172] (0xc0006c7ae0) (1) Data frame sent\nI0820 23:05:36.621247 863 log.go:172] (0xc0003c0d10) (0xc0006c7ae0) Stream removed, broadcasting: 1\nI0820 23:05:36.621537 863 log.go:172] (0xc0003c0d10) (0xc0006c7ae0) Stream removed, broadcasting: 1\nI0820 23:05:36.621556 863 log.go:172] (0xc0003c0d10) (0xc000902000) Stream removed, broadcasting: 3\nI0820 23:05:36.621564 863 log.go:172] (0xc0003c0d10) (0xc0006c7cc0) Stream removed, broadcasting: 5\n" Aug 20 23:05:36.631: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:05:36.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6986" for this suite. 
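For context, the nodeport-test Service exercised above is declared with type NodePort, which is why both node IPs (172.18.0.6 and 172.18.0.3) answer on the allocated port 31275 in the nc checks. A minimal sketch of such a Service follows; the selector label is assumed, and the nodePort is left for the API server to allocate:
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80   # assumed; a nodePort such as 31275 is allocated automatically unless pinned
  selector:
    name: nodeport-test   # assumed label on the backing pods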
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.257 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":67,"skipped":1234,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:05:36.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:05:40.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7848" for this suite. 
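The Docker Containers case that follows relies on a container spec that sets neither command nor args, so the image's own ENTRYPOINT and CMD run unchanged. A minimal sketch, with a hypothetical pod name and the httpd image borrowed from elsewhere in this run:
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: docker.io/library/httpd:2.4.38-alpine
    # no command/args set: Kubernetes falls back to the image's ENTRYPOINT and CMD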
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1242,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:05:40.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-3dc439b1-3cd0-40b5-8eee-7986a6ce97c9 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-3dc439b1-3cd0-40b5-8eee-7986a6ce97c9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:07:07.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2341" for this suite. • [SLOW TEST:86.568 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1248,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:07:07.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-747.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-747.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 20 23:07:13.515: INFO: DNS probes using dns-747/dns-test-292e6dcc-5d00-4ca9-81a3-c4f03d98c462 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:07:13.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-747" for this suite. 
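The dig loops above boil down to resolving the cluster's built-in names from inside a pod. A minimal hand-run equivalent would be a one-shot lookup pod like the sketch below (hypothetical pod name, assumed busybox image):
apiVersion: v1
kind: Pod
metadata:
  name: dns-check   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: q
    image: docker.io/library/busybox:1.29   # assumed utility image, not part of this run
    command: ["nslookup", "kubernetes.default.svc.cluster.local"]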
• [SLOW TEST:6.304 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":70,"skipped":1257,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:07:13.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 20 23:07:13.853: INFO: Waiting up to 5m0s for pod "pod-4cf3f82b-d596-4406-bbb3-f5437844e77b" in namespace "emptydir-2126" to be "success or failure" Aug 20 23:07:13.888: INFO: Pod "pod-4cf3f82b-d596-4406-bbb3-f5437844e77b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.369232ms Aug 20 23:07:16.015: INFO: Pod "pod-4cf3f82b-d596-4406-bbb3-f5437844e77b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161796022s Aug 20 23:07:18.017: INFO: Pod "pod-4cf3f82b-d596-4406-bbb3-f5437844e77b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164293194s STEP: Saw pod success Aug 20 23:07:18.017: INFO: Pod "pod-4cf3f82b-d596-4406-bbb3-f5437844e77b" satisfied condition "success or failure" Aug 20 23:07:18.019: INFO: Trying to get logs from node jerma-worker pod pod-4cf3f82b-d596-4406-bbb3-f5437844e77b container test-container: STEP: delete the pod Aug 20 23:07:18.051: INFO: Waiting for pod pod-4cf3f82b-d596-4406-bbb3-f5437844e77b to disappear Aug 20 23:07:18.062: INFO: Pod pod-4cf3f82b-d596-4406-bbb3-f5437844e77b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:07:18.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2126" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1263,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:07:18.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:07:29.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4429" for this suite. • [SLOW TEST:11.272 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":72,"skipped":1272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:07:29.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 20 23:07:33.964: INFO: Successfully updated pod "annotationupdate3e4a75f9-8a3a-40da-bae5-f02fe99b2e86" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:07:37.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2728" for this suite. • [SLOW TEST:8.655 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:07:38.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: 
verifying the pod is in kubernetes STEP: updating the pod Aug 20 23:07:42.590: INFO: Successfully updated pod "pod-update-activedeadlineseconds-362cedfd-9651-4b8a-8fb9-15a135f0a6c3" Aug 20 23:07:42.590: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-362cedfd-9651-4b8a-8fb9-15a135f0a6c3" in namespace "pods-2790" to be "terminated due to deadline exceeded" Aug 20 23:07:42.595: INFO: Pod "pod-update-activedeadlineseconds-362cedfd-9651-4b8a-8fb9-15a135f0a6c3": Phase="Running", Reason="", readiness=true. Elapsed: 5.674525ms Aug 20 23:07:44.600: INFO: Pod "pod-update-activedeadlineseconds-362cedfd-9651-4b8a-8fb9-15a135f0a6c3": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009890584s Aug 20 23:07:44.600: INFO: Pod "pod-update-activedeadlineseconds-362cedfd-9651-4b8a-8fb9-15a135f0a6c3" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:07:44.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2790" for this suite. • [SLOW TEST:6.635 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:07:44.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 23:07:45.081: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 23:07:47.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561665, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733561665, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561665, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561665, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 23:07:50.146: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:07:50.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5066" for this suite. STEP: Destroying namespace "webhook-5066-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.057 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":75,"skipped":1361,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:07:50.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart 
[NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:08:50.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-46" for this suite. • [SLOW TEST:60.103 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1371,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:08:50.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 23:08:51.615: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 23:08:53.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561731, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561731, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561731, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561731, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 
23:08:56.655: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:08:56.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3275-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:08:57.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3382" for this suite. STEP: Destroying namespace "webhook-3382-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.184 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":77,"skipped":1377,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:08:57.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 20 23:08:58.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de" in namespace "projected-5180" to be "success or failure" Aug 20 23:08:58.103: INFO: Pod "downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de": Phase="Pending", Reason="", readiness=false. Elapsed: 7.205878ms Aug 20 23:09:00.107: INFO: Pod "downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011449152s Aug 20 23:09:02.117: INFO: Pod "downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021266546s STEP: Saw pod success Aug 20 23:09:02.117: INFO: Pod "downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de" satisfied condition "success or failure" Aug 20 23:09:02.119: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de container client-container: STEP: delete the pod Aug 20 23:09:02.181: INFO: Waiting for pod downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de to disappear Aug 20 23:09:02.192: INFO: Pod downwardapi-volume-0b29cae0-ef4f-4730-9ca9-5e56778182de no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:02.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5180" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1395,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:02.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 20 23:09:02.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673" in namespace "downward-api-1211" to be "success or failure" Aug 20 23:09:02.445: INFO: Pod "downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673": Phase="Pending", Reason="", readiness=false. Elapsed: 10.415917ms Aug 20 23:09:04.472: INFO: Pod "downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037887458s Aug 20 23:09:06.476: INFO: Pod "downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041414975s STEP: Saw pod success Aug 20 23:09:06.476: INFO: Pod "downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673" satisfied condition "success or failure" Aug 20 23:09:06.479: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673 container client-container: STEP: delete the pod Aug 20 23:09:06.574: INFO: Waiting for pod downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673 to disappear Aug 20 23:09:06.581: INFO: Pod downwardapi-volume-b46933e2-3ad1-4d11-9a84-9954a27fe673 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:06.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1211" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1396,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:06.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 20 23:09:06.626: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:12.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-600" for this suite. 
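What the init-container spec above asserts follows from two rules: init containers must all succeed before any app container starts, and with restartPolicy: Never a failed init container is not retried, so the whole pod goes straight to the Failed phase. A minimal sketch of that kind of pod, with illustrative names and image rather than the suite's own:

    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo            # hypothetical name
    spec:
      restartPolicy: Never            # a failed init container fails the pod outright
      initContainers:
      - name: init-fails
        image: busybox
        command: ["/bin/false"]       # exits 1; with Never it is not retried
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]    # never runs: all init containers must succeed first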
• [SLOW TEST:6.110 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":80,"skipped":1401,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:12.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-afa10fe4-c3a7-4001-b8ba-d9959d1e684e STEP: Creating a pod to test consume secrets Aug 20 23:09:12.822: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6" in namespace "projected-4652" to be "success or failure" Aug 20 23:09:12.826: INFO: Pod "pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.996399ms Aug 20 23:09:14.830: INFO: Pod "pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008109131s Aug 20 23:09:16.834: INFO: Pod "pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011385335s STEP: Saw pod success Aug 20 23:09:16.834: INFO: Pod "pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6" satisfied condition "success or failure" Aug 20 23:09:16.836: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6 container projected-secret-volume-test: STEP: delete the pod Aug 20 23:09:16.871: INFO: Waiting for pod pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6 to disappear Aug 20 23:09:16.894: INFO: Pod pod-projected-secrets-3e58de10-3461-4917-b0f0-bd0fed7b8ef6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:16.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4652" for this suite. 
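The projected-secret spec above turns on the projected volume's defaultMode: a single permission mode applied to every file merged into the mount. Roughly, and again with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo     # hypothetical name
    spec:
      containers:
      - name: reader
        image: busybox
        command: ["ls", "-l", "/etc/projected"]   # shows the applied file modes
        volumeMounts:
        - name: creds
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: creds
        projected:
          defaultMode: 0400           # octal; applied to each projected file unless a per-item mode overrides it
          sources:
          - secret:
              name: my-secret         # hypothetical Secret to project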
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1408,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:16.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7220 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7220 I0820 23:09:17.050357 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7220, replica count: 2 I0820 23:09:20.100892 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 23:09:23.101140 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 20 23:09:23.101: INFO: Creating new exec pod Aug 20 23:09:28.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7220 execpodddgtv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 20 23:09:28.372: INFO: stderr: "I0820 23:09:28.252290 885 log.go:172] (0xc0001113f0) (0xc000a74000) Create stream\nI0820 23:09:28.252342 885 log.go:172] (0xc0001113f0) (0xc000a74000) Stream added, broadcasting: 1\nI0820 23:09:28.254807 885 log.go:172] (0xc0001113f0) Reply frame received for 1\nI0820 23:09:28.254864 885 log.go:172] (0xc0001113f0) (0xc0007259a0) Create stream\nI0820 23:09:28.254887 885 log.go:172] (0xc0001113f0) (0xc0007259a0) Stream added, broadcasting: 3\nI0820 23:09:28.256002 885 log.go:172] (0xc0001113f0) Reply frame received for 3\nI0820 23:09:28.256036 885 log.go:172] (0xc0001113f0) (0xc000a740a0) Create stream\nI0820 23:09:28.256046 885 log.go:172] (0xc0001113f0) (0xc000a740a0) Stream added, broadcasting: 5\nI0820 23:09:28.257108 885 log.go:172] (0xc0001113f0) Reply frame received for 5\nI0820 23:09:28.347053 885 log.go:172] (0xc0001113f0) Data frame received for 5\nI0820 23:09:28.347099 885 log.go:172] (0xc000a740a0) (5) Data frame handling\nI0820 23:09:28.347140 885 log.go:172] (0xc000a740a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0820 23:09:28.362406 885 log.go:172] (0xc0001113f0) Data frame received for 5\nI0820 23:09:28.362444 885 log.go:172] 
(0xc000a740a0) (5) Data frame handling\nI0820 23:09:28.362466 885 log.go:172] (0xc000a740a0) (5) Data frame sent\nI0820 23:09:28.362478 885 log.go:172] (0xc0001113f0) Data frame received for 5\nI0820 23:09:28.362486 885 log.go:172] (0xc000a740a0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0820 23:09:28.362657 885 log.go:172] (0xc0001113f0) Data frame received for 3\nI0820 23:09:28.362695 885 log.go:172] (0xc0007259a0) (3) Data frame handling\nI0820 23:09:28.364522 885 log.go:172] (0xc0001113f0) Data frame received for 1\nI0820 23:09:28.364571 885 log.go:172] (0xc000a74000) (1) Data frame handling\nI0820 23:09:28.364607 885 log.go:172] (0xc000a74000) (1) Data frame sent\nI0820 23:09:28.364693 885 log.go:172] (0xc0001113f0) (0xc000a74000) Stream removed, broadcasting: 1\nI0820 23:09:28.364894 885 log.go:172] (0xc0001113f0) Go away received\nI0820 23:09:28.365399 885 log.go:172] (0xc0001113f0) (0xc000a74000) Stream removed, broadcasting: 1\nI0820 23:09:28.365439 885 log.go:172] (0xc0001113f0) (0xc0007259a0) Stream removed, broadcasting: 3\nI0820 23:09:28.365458 885 log.go:172] (0xc0001113f0) (0xc000a740a0) Stream removed, broadcasting: 5\n" Aug 20 23:09:28.372: INFO: stdout: "" Aug 20 23:09:28.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7220 execpodddgtv -- /bin/sh -x -c nc -zv -t -w 2 10.99.62.181 80' Aug 20 23:09:28.590: INFO: stderr: "I0820 23:09:28.501362 906 log.go:172] (0xc0003c04d0) (0xc0008bc280) Create stream\nI0820 23:09:28.501430 906 log.go:172] (0xc0003c04d0) (0xc0008bc280) Stream added, broadcasting: 1\nI0820 23:09:28.503402 906 log.go:172] (0xc0003c04d0) Reply frame received for 1\nI0820 23:09:28.503436 906 log.go:172] (0xc0003c04d0) (0xc00036c780) Create stream\nI0820 23:09:28.503443 906 log.go:172] (0xc0003c04d0) (0xc00036c780) Stream added, broadcasting: 3\nI0820 23:09:28.504555 906 log.go:172] (0xc0003c04d0) Reply frame received for 3\nI0820 23:09:28.504613 906 log.go:172] (0xc0003c04d0) (0xc0006fa000) Create stream\nI0820 23:09:28.504650 906 log.go:172] (0xc0003c04d0) (0xc0006fa000) Stream added, broadcasting: 5\nI0820 23:09:28.506181 906 log.go:172] (0xc0003c04d0) Reply frame received for 5\nI0820 23:09:28.576190 906 log.go:172] (0xc0003c04d0) Data frame received for 3\nI0820 23:09:28.576262 906 log.go:172] (0xc0003c04d0) Data frame received for 5\nI0820 23:09:28.576318 906 log.go:172] (0xc0006fa000) (5) Data frame handling\nI0820 23:09:28.576345 906 log.go:172] (0xc0006fa000) (5) Data frame sent\nI0820 23:09:28.576367 906 log.go:172] (0xc0003c04d0) Data frame received for 5\nI0820 23:09:28.576388 906 log.go:172] (0xc0006fa000) (5) Data frame handling\nI0820 23:09:28.576425 906 log.go:172] (0xc00036c780) (3) Data frame handling\n+ nc -zv -t -w 2 10.99.62.181 80\nConnection to 10.99.62.181 80 port [tcp/http] succeeded!\nI0820 23:09:28.577974 906 log.go:172] (0xc0003c04d0) Data frame received for 1\nI0820 23:09:28.578070 906 log.go:172] (0xc0008bc280) (1) Data frame handling\nI0820 23:09:28.578107 906 log.go:172] (0xc0008bc280) (1) Data frame sent\nI0820 23:09:28.578136 906 log.go:172] (0xc0003c04d0) (0xc0008bc280) Stream removed, broadcasting: 1\nI0820 23:09:28.578196 906 log.go:172] (0xc0003c04d0) Go away received\nI0820 23:09:28.578695 906 log.go:172] (0xc0003c04d0) (0xc0008bc280) Stream removed, broadcasting: 1\nI0820 23:09:28.578720 906 log.go:172] (0xc0003c04d0) (0xc00036c780) Stream removed, broadcasting: 3\nI0820 23:09:28.578731 906 log.go:172] (0xc0003c04d0) 
(0xc0006fa000) Stream removed, broadcasting: 5\n" Aug 20 23:09:28.590: INFO: stdout: "" Aug 20 23:09:28.590: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:28.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7220" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.713 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":82,"skipped":1426,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:28.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Aug 20 23:09:28.697: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:45.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7672" for this suite. 
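The "rename a version" step above amounts to patching the versions list of a multi-version CRD; the apiserver then republishes its OpenAPI document under the new version name and stops serving the old one. A minimal sketch of such a CRD, with a hypothetical group and kind:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com          # hypothetical
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v2                      # renaming this entry (e.g. v1 -> v2) changes the published spec
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
      - name: v3                      # the untouched version must keep serving unchanged
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true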
• [SLOW TEST:16.513 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":83,"skipped":1440,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:45.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 20 23:09:45.233: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3971 /api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-resource-version 8319411a-0053-4ac9-9313-206c7e4c8530 1948451 0 2020-08-20 23:09:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 20 23:09:45.233: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3971 /api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-resource-version 8319411a-0053-4ac9-9313-206c7e4c8530 1948452 0 2020-08-20 23:09:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:45.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3971" for this suite. 
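A watch opened at a historical resourceVersion replays every event that happened after that version, which is why starting from the version returned by the first update yields exactly the later MODIFIED and the DELETED notifications above. The same stream can be requested by hand against the core API, e.g. kubectl get --raw "/api/v1/namespaces/watch-3971/configmaps?watch=true&resourceVersion=<rv-from-first-update>" (the resource version here is a placeholder; substitute the one returned by the first update).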
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":84,"skipped":1452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:45.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5580.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5580.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5580.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5580.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5580.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5580.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 20 23:09:51.374: INFO: DNS probes using dns-5580/dns-test-56ec115f-34eb-456a-a705-970e1a472e53 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:51.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5580" for this suite. 
• [SLOW TEST:6.191 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":85,"skipped":1480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:51.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:09:51.490: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:55.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4742" for this suite. 
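This spec and plain kubectl logs both read the pod's log subresource on the API server (/api/v1/namespaces/<namespace>/pods/<name>/log, with follow=true for streaming); here the client requests that same endpoint over a WebSocket upgrade instead of a plain HTTP response, and the assertion is that the bytes arriving over that channel match the container's output.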
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1531,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:55.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Aug 20 23:09:55.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Aug 20 23:09:55.709: INFO: stderr: "" Aug 20 23:09:55.709: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:55.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7033" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":87,"skipped":1555,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:55.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:09:59.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3084" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1562,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:09:59.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-543/secret-test-92b8ddc5-3a99-4740-bc9d-379831b0e75c STEP: Creating a pod to test consume secrets Aug 20 23:09:59.960: INFO: Waiting up to 5m0s for pod "pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9" in namespace "secrets-543" to be "success or failure" Aug 20 23:09:59.967: INFO: Pod "pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.222329ms Aug 20 23:10:02.022: INFO: Pod "pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062169324s Aug 20 23:10:04.026: INFO: Pod "pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066228315s STEP: Saw pod success Aug 20 23:10:04.026: INFO: Pod "pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9" satisfied condition "success or failure" Aug 20 23:10:04.029: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9 container env-test: STEP: delete the pod Aug 20 23:10:04.052: INFO: Waiting for pod pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9 to disappear Aug 20 23:10:04.057: INFO: Pod pod-configmaps-afcefeb6-1723-4c30-8660-898e6c93a4a9 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:10:04.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-543" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:10:04.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-48981ec8-b356-4779-90fe-085202d06f6f STEP: Creating a pod to test consume configMaps Aug 20 23:10:04.161: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264" in namespace "projected-7731" to be "success or failure" Aug 20 23:10:04.177: INFO: Pod "pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264": Phase="Pending", Reason="", readiness=false. Elapsed: 15.479611ms Aug 20 23:10:06.180: INFO: Pod "pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018924269s Aug 20 23:10:08.184: INFO: Pod "pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022838074s STEP: Saw pod success Aug 20 23:10:08.184: INFO: Pod "pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264" satisfied condition "success or failure" Aug 20 23:10:08.187: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264 container projected-configmap-volume-test: STEP: delete the pod Aug 20 23:10:08.273: INFO: Waiting for pod pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264 to disappear Aug 20 23:10:08.333: INFO: Pod pod-projected-configmaps-1a34d0be-9b1b-4e6e-ad7b-1a5395a1b264 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:10:08.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7731" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:10:08.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 20 23:10:08.405: INFO: Waiting up to 5m0s for pod "pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5" in namespace "emptydir-5817" to be "success or failure" Aug 20 23:10:08.409: INFO: Pod "pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324624ms Aug 20 23:10:10.413: INFO: Pod "pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008211732s Aug 20 23:10:12.417: INFO: Pod "pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012110169s STEP: Saw pod success Aug 20 23:10:12.417: INFO: Pod "pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5" satisfied condition "success or failure" Aug 20 23:10:12.419: INFO: Trying to get logs from node jerma-worker pod pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5 container test-container: STEP: delete the pod Aug 20 23:10:12.457: INFO: Waiting for pod pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5 to disappear Aug 20 23:10:12.469: INFO: Pod pod-c7ecba79-0ff6-4b8a-bb98-f19ce16110e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:10:12.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5817" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:10:12.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Aug 20 23:10:12.603: INFO: Waiting up to 5m0s for pod "pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0" in namespace "emptydir-1151" to be "success or failure" Aug 20 23:10:12.607: INFO: Pod "pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.575184ms Aug 20 23:10:14.619: INFO: Pod "pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015626606s Aug 20 23:10:16.632: INFO: Pod "pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0286794s STEP: Saw pod success Aug 20 23:10:16.632: INFO: Pod "pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0" satisfied condition "success or failure" Aug 20 23:10:16.635: INFO: Trying to get logs from node jerma-worker pod pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0 container test-container: STEP: delete the pod Aug 20 23:10:16.662: INFO: Waiting for pod pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0 to disappear Aug 20 23:10:16.692: INFO: Pod pod-1408f02c-c64e-4eef-9fe4-e4c5cef36bd0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:10:16.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1151" for this suite. 
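Both emptydir specs above come down to mounting an emptyDir and inspecting permissions from inside the container. A minimal sketch (names illustrative); medium: Memory would back the volume with tmpfs instead of node-local storage:

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo             # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount's mode
        volumeMounts:
        - name: scratch
          mountPath: /test-volume
      volumes:
      - name: scratch
        emptyDir: {}                  # default medium = node-local storage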
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1693,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:10:16.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0820 23:10:28.896969 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 20 23:10:28.897: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:10:28.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4630" for this suite. 
• [SLOW TEST:12.202 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":93,"skipped":1708,"failed":0} [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:10:28.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 20 23:10:29.053: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a" in namespace "projected-6175" to be "success or failure" Aug 20 23:10:29.087: INFO: Pod "downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.106868ms Aug 20 23:10:31.091: INFO: Pod "downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037592331s Aug 20 23:10:33.096: INFO: Pod "downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042697573s STEP: Saw pod success Aug 20 23:10:33.096: INFO: Pod "downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a" satisfied condition "success or failure" Aug 20 23:10:33.099: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a container client-container: STEP: delete the pod Aug 20 23:10:33.155: INFO: Waiting for pod downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a to disappear Aug 20 23:10:33.190: INFO: Pod downwardapi-volume-e440fad4-9f4e-40e6-8e90-67bc82afc10a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:10:33.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6175" for this suite. 
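Same defaultMode mechanism as the projected-secret sketch earlier, but with a downward API source: the volume-level defaultMode governs the permission bits of every generated file. Roughly, with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo   # hypothetical name
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          defaultMode: 0400           # octal; the mode the spec checks on the created files
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name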
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1708,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:10:33.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6285.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6285.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6285.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 20 23:10:41.639: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.645: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.650: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.653: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.664: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.666: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.668: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.670: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:41.674: INFO: Lookups using dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local] Aug 20 23:10:46.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource 
(get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.684: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.687: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.699: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.709: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.711: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.714: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.717: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:46.769: INFO: Lookups using dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local] Aug 20 23:10:51.718: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.730: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.733: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.735: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local from 
pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.742: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.745: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.747: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.749: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:51.754: INFO: Lookups using dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local] Aug 20 23:10:56.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.683: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.687: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.690: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.700: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.703: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods 
dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.705: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.708: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:10:56.754: INFO: Lookups using dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local] Aug 20 23:11:01.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.683: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.686: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.690: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.699: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.702: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.705: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.708: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:01.714: INFO: Lookups using dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local] Aug 20 23:11:06.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.682: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.684: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.693: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.788: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.791: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.795: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.798: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local from pod dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b: the server could not find the requested resource (get pods dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b) Aug 20 23:11:06.804: INFO: Lookups using dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6285.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6285.svc.cluster.local jessie_udp@dns-test-service-2.dns-6285.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6285.svc.cluster.local] Aug 20 23:11:11.715: INFO: DNS probes using dns-6285/dns-test-1d6fa8a2-24b5-4d5e-815a-61c3eebc396b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:11:12.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6285" for this suite. • [SLOW TEST:39.284 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":95,"skipped":1710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:11:12.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:11:28.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7724" for this suite. • [SLOW TEST:16.517 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":96,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:11:28.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-629.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-629.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-629.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 20 23:11:35.095: INFO: DNS probes using dns-test-3ba39d36-3bf9-443b-b46c-a583048a7880 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-629.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-629.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-629.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 20 23:11:43.235: INFO: File wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 20 23:11:43.238: INFO: File jessie_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 20 23:11:43.238: INFO: Lookups using dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 failed for: [wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local jessie_udp@dns-test-service-3.dns-629.svc.cluster.local] Aug 20 23:11:48.243: INFO: File wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 20 23:11:48.246: INFO: File jessie_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 20 23:11:48.246: INFO: Lookups using dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 failed for: [wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local jessie_udp@dns-test-service-3.dns-629.svc.cluster.local] Aug 20 23:11:53.243: INFO: File wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 20 23:11:53.247: INFO: File jessie_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 20 23:11:53.247: INFO: Lookups using dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 failed for: [wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local jessie_udp@dns-test-service-3.dns-629.svc.cluster.local] Aug 20 23:11:58.253: INFO: File wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 20 23:11:58.299: INFO: File jessie_udp@dns-test-service-3.dns-629.svc.cluster.local from pod dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 20 23:11:58.299: INFO: Lookups using dns-629/dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 failed for: [wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local jessie_udp@dns-test-service-3.dns-629.svc.cluster.local] Aug 20 23:12:03.253: INFO: DNS probes using dns-test-6cf0c3e6-e0d9-4cf9-a991-f9999013b451 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-629.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-629.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-629.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-629.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 20 23:12:09.985: INFO: DNS probes using dns-test-fc90534f-808b-4819-aa6c-4ed5c4d53c9c succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:10.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-629" for this suite. 
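Each phase of this test follows the same pattern: repoint the service, then poll with dig until the cluster DNS answer changes; the repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines above are cached answers draining out, not failures. A hand-run sketch of the first two phases, assuming the default namespace and a probe image that ships dig:

    kubectl create service externalname dns-test-service-3 --external-name foo.example.com
    kubectl run dns-probe --rm -it --restart=Never --image=tutum/dnsutils -- \
      dig +short dns-test-service-3.default.svc.cluster.local CNAME    # expect: foo.example.com.
    kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
    # re-run the dig probe until bar.example.com. comes back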
• [SLOW TEST:41.061 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":97,"skipped":1768,"failed":0} [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:10.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 20 23:12:10.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8616' Aug 20 23:12:15.081: INFO: stderr: "" Aug 20 23:12:15.081: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765 Aug 20 23:12:15.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8616' Aug 20 23:12:21.745: INFO: stderr: "" Aug 20 23:12:21.745: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:21.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8616" for this suite. 
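--generator=run-pod/v1 is the v1.17-era spelling; it is --restart=Never that makes kubectl run create a bare Pod instead of a managed workload, which is what this step verifies. The same check on a current kubectl, no generator flag needed:

    kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
    kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'    # expect: Never
    kubectl delete pod e2e-test-httpd-pod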
• [SLOW TEST:11.693 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":98,"skipped":1768,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:21.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-bf498235-20ca-441b-a560-1bb6aee58c37 STEP: Creating a pod to test consume secrets Aug 20 23:12:21.835: INFO: Waiting up to 5m0s for pod "pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24" in namespace "secrets-2698" to be "success or failure" Aug 20 23:12:21.867: INFO: Pod "pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24": Phase="Pending", Reason="", readiness=false. Elapsed: 32.383397ms Aug 20 23:12:23.871: INFO: Pod "pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036204701s Aug 20 23:12:25.875: INFO: Pod "pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039871354s STEP: Saw pod success Aug 20 23:12:25.875: INFO: Pod "pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24" satisfied condition "success or failure" Aug 20 23:12:25.877: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24 container secret-volume-test: STEP: delete the pod Aug 20 23:12:26.025: INFO: Waiting for pod pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24 to disappear Aug 20 23:12:26.042: INFO: Pod pod-secrets-83a1557c-bda2-49ed-a34c-29b4e4167e24 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:26.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2698" for this suite. 
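"With mappings" means the secret volume uses items to remap a key to a chosen file path instead of mounting every key under its own name. A minimal sketch with illustrative names:

    # first: kubectl create secret generic secret-test-map --from-literal=data-1=value-1
    # then save as pod.yaml and run: kubectl apply -f pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-mapping-demo        # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map
          items:
          - key: data-1
            path: new-path-data-1           # the key-to-path mapping under test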
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1768,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:26.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b85dd9fd-f3fb-49a7-9c74-1ebf129871db STEP: Creating a pod to test consume configMaps Aug 20 23:12:26.164: INFO: Waiting up to 5m0s for pod "pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8" in namespace "configmap-2582" to be "success or failure" Aug 20 23:12:26.168: INFO: Pod "pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218786ms Aug 20 23:12:28.172: INFO: Pod "pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008561888s Aug 20 23:12:30.177: INFO: Pod "pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012826548s STEP: Saw pod success Aug 20 23:12:30.177: INFO: Pod "pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8" satisfied condition "success or failure" Aug 20 23:12:30.180: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8 container configmap-volume-test: STEP: delete the pod Aug 20 23:12:30.212: INFO: Waiting for pod pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8 to disappear Aug 20 23:12:30.216: INFO: Pod pod-configmaps-e060107f-7695-4a6a-9c27-362fe7293db8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:30.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2582" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1770,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:30.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 23:12:30.822: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 23:12:32.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561950, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561950, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561950, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733561950, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 23:12:35.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 20 23:12:40.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-7359 to-be-attached-pod -i -c=container1' Aug 20 23:12:40.154: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:40.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7359" for this suite. 
STEP: Destroying namespace "webhook-7359-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":101,"skipped":1772,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:40.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 20 23:12:40.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa" in namespace "downward-api-1789" to be "success or failure" Aug 20 23:12:40.393: INFO: Pod "downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa": Phase="Pending", Reason="", readiness=false. Elapsed: 12.402419ms Aug 20 23:12:42.398: INFO: Pod "downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016960112s Aug 20 23:12:44.402: INFO: Pod "downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021248627s STEP: Saw pod success Aug 20 23:12:44.402: INFO: Pod "downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa" satisfied condition "success or failure" Aug 20 23:12:44.406: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa container client-container: STEP: delete the pod Aug 20 23:12:44.486: INFO: Waiting for pod downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa to disappear Aug 20 23:12:44.505: INFO: Pod downwardapi-volume-6c8bac89-3912-470b-96c8-08e51b8b63aa no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:44.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1789" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1785,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:44.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 20 23:12:44.580: INFO: Waiting up to 5m0s for pod "pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06" in namespace "emptydir-7479" to be "success or failure" Aug 20 23:12:44.616: INFO: Pod "pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06": Phase="Pending", Reason="", readiness=false. Elapsed: 36.73237ms Aug 20 23:12:46.621: INFO: Pod "pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041197394s Aug 20 23:12:48.625: INFO: Pod "pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06": Phase="Running", Reason="", readiness=true. Elapsed: 4.045639977s Aug 20 23:12:50.630: INFO: Pod "pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.04995521s STEP: Saw pod success Aug 20 23:12:50.630: INFO: Pod "pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06" satisfied condition "success or failure" Aug 20 23:12:50.633: INFO: Trying to get logs from node jerma-worker pod pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06 container test-container: STEP: delete the pod Aug 20 23:12:50.668: INFO: Waiting for pod pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06 to disappear Aug 20 23:12:50.672: INFO: Pod pod-e5d4e6ce-a074-42ee-afb1-dc643e548a06 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:50.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7479" for this suite. • [SLOW TEST:6.166 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1802,"failed":0} [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:50.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 20 23:12:50.765: INFO: Waiting up to 5m0s for pod "downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef" in namespace "downward-api-8171" to be "success or failure" Aug 20 23:12:50.780: INFO: Pod "downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 15.162611ms Aug 20 23:12:52.785: INFO: Pod "downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019340652s Aug 20 23:12:54.789: INFO: Pod "downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023427288s STEP: Saw pod success Aug 20 23:12:54.789: INFO: Pod "downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef" satisfied condition "success or failure" Aug 20 23:12:54.792: INFO: Trying to get logs from node jerma-worker2 pod downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef container dapi-container: STEP: delete the pod Aug 20 23:12:54.812: INFO: Waiting for pod downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef to disappear Aug 20 23:12:54.816: INFO: Pod downward-api-85762f36-41b3-4477-980e-0f0edf92a6ef no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:12:54.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8171" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1802,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:12:54.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 20 23:12:54.864: INFO: >>> kubeConfig: /root/.kube/config Aug 20 23:12:56.796: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:13:07.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1303" for this suite. 
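Between the two kubeConfig loads above, the test registers CRDs in two different API groups and then confirms that both schemas show up in the aggregated OpenAPI document. A rough manual check, with illustrative group names:

    # after creating one CRD in group groupa.example.com and another in groupb.example.com:
    kubectl get --raw /openapi/v2 | grep -o 'group[ab]\.example\.com' | sort -u
    # both groups listed => both CRDs' schemas were published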
• [SLOW TEST:12.433 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":105,"skipped":1810,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:13:07.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-crhk STEP: Creating a pod to test atomic-volume-subpath Aug 20 23:13:07.471: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-crhk" in namespace "subpath-6472" to be "success or failure" Aug 20 23:13:07.475: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.571214ms Aug 20 23:13:09.478: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006927096s Aug 20 23:13:11.482: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 4.010590394s Aug 20 23:13:13.486: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 6.014744548s Aug 20 23:13:15.489: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 8.017867776s Aug 20 23:13:17.493: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 10.021502647s Aug 20 23:13:19.497: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 12.025927281s Aug 20 23:13:21.501: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 14.029586115s Aug 20 23:13:23.585: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 16.113437241s Aug 20 23:13:25.588: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.11670614s Aug 20 23:13:27.592: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 20.120486411s Aug 20 23:13:29.596: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 22.124420736s Aug 20 23:13:31.599: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Running", Reason="", readiness=true. Elapsed: 24.127875261s Aug 20 23:13:33.603: INFO: Pod "pod-subpath-test-configmap-crhk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.131151081s STEP: Saw pod success Aug 20 23:13:33.603: INFO: Pod "pod-subpath-test-configmap-crhk" satisfied condition "success or failure" Aug 20 23:13:33.605: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-crhk container test-container-subpath-configmap-crhk: STEP: delete the pod Aug 20 23:13:33.642: INFO: Waiting for pod pod-subpath-test-configmap-crhk to disappear Aug 20 23:13:33.650: INFO: Pod pod-subpath-test-configmap-crhk no longer exists STEP: Deleting pod pod-subpath-test-configmap-crhk Aug 20 23:13:33.650: INFO: Deleting pod "pod-subpath-test-configmap-crhk" in namespace "subpath-6472" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:13:33.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6472" for this suite. • [SLOW TEST:26.401 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":106,"skipped":1821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:13:33.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that 
daemon pods launch on every node of the cluster. Aug 20 23:13:33.722: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:33.727: INFO: Number of nodes with available pods: 0 Aug 20 23:13:33.727: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:13:34.733: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:34.736: INFO: Number of nodes with available pods: 0 Aug 20 23:13:34.736: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:13:35.747: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:35.821: INFO: Number of nodes with available pods: 0 Aug 20 23:13:35.821: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:13:36.749: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:36.753: INFO: Number of nodes with available pods: 0 Aug 20 23:13:36.753: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:13:37.906: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:37.911: INFO: Number of nodes with available pods: 0 Aug 20 23:13:37.911: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:13:38.747: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:38.797: INFO: Number of nodes with available pods: 1 Aug 20 23:13:38.797: INFO: Node jerma-worker is running more than one daemon pod Aug 20 23:13:39.731: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:39.733: INFO: Number of nodes with available pods: 2 Aug 20 23:13:39.733: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 20 23:13:39.758: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 23:13:39.904: INFO: Number of nodes with available pods: 2 Aug 20 23:13:39.904: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
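Each poll above skips the control-plane node because the DaemonSet does not tolerate its node-role.kubernetes.io/master taint, then counts available daemon pods on the remaining nodes until the count matches the node count. A rough standalone equivalent, assuming the DaemonSet and namespace names from the log, is to poll the DaemonSet status, whose desiredNumberScheduled already excludes nodes the pods cannot tolerate:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        // Poll until every node that should run the daemon pod has one available.
        // Note: jsonpath errors while numberAvailable is still 0/omitted, which
        // this sketch simply treats as "keep waiting".
        for i := 0; i < 30; i++ {
            out, err := exec.Command("kubectl", "get", "daemonset", "daemon-set",
                "-n", "daemonsets-3874",
                "-o", "jsonpath={.status.numberAvailable}/{.status.desiredNumberScheduled}").Output()
            if err == nil {
                parts := strings.SplitN(string(out), "/", 2)
                if len(parts) == 2 && parts[0] != "" && parts[0] == parts[1] {
                    fmt.Println("all daemon pods available:", string(out))
                    return
                }
            }
            fmt.Println("still waiting:", string(out))
            time.Sleep(2 * time.Second)
        }
    }
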
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3874, will wait for the garbage collector to delete the pods Aug 20 23:13:41.009: INFO: Deleting DaemonSet.extensions daemon-set took: 6.540793ms Aug 20 23:13:41.309: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.235197ms Aug 20 23:13:44.812: INFO: Number of nodes with available pods: 0 Aug 20 23:13:44.813: INFO: Number of running nodes: 0, number of available pods: 0 Aug 20 23:13:44.815: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3874/daemonsets","resourceVersion":"1950072"},"items":null} Aug 20 23:13:44.818: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3874/pods","resourceVersion":"1950072"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:13:44.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3874" for this suite. • [SLOW TEST:11.176 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":107,"skipped":1844,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:13:44.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0820 23:13:46.009470 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 20 23:13:46.009: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:13:46.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9266" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":108,"skipped":1865,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:13:46.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 23:13:46.621: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 23:13:48.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 20 23:13:50.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562026, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 23:13:53.702: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:14:05.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6763" for this suite. STEP: Destroying namespace "webhook-6763-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.105 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":109,"skipped":1869,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:14:06.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3643 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 20 23:14:06.211: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 20 23:14:34.507: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.12 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 23:14:34.507: INFO: >>> kubeConfig: /root/.kube/config I0820 23:14:34.539163 6 log.go:172] (0xc0021bc630) (0xc001d05860) Create stream I0820 23:14:34.539192 6 log.go:172] (0xc0021bc630) (0xc001d05860) Stream added, broadcasting: 1 I0820 23:14:34.541181 6 log.go:172] (0xc0021bc630) Reply frame received for 1 I0820 23:14:34.541236 6 log.go:172] (0xc0021bc630) (0xc0011ca460) Create stream I0820 23:14:34.541251 6 log.go:172] (0xc0021bc630) (0xc0011ca460) Stream added, broadcasting: 3 I0820 23:14:34.542487 6 log.go:172] (0xc0021bc630) Reply frame received for 3 I0820 23:14:34.542519 6 log.go:172] (0xc0021bc630) (0xc0011ca780) Create stream I0820 23:14:34.542529 6 log.go:172] (0xc0021bc630) (0xc0011ca780) Stream added, broadcasting: 5 I0820 23:14:34.543315 6 log.go:172] (0xc0021bc630) Reply frame received for 5 I0820 23:14:35.634458 6 log.go:172] (0xc0021bc630) Data frame received for 3 I0820 23:14:35.634500 6 log.go:172] (0xc0011ca460) (3) Data frame handling I0820 23:14:35.634526 6 log.go:172] (0xc0011ca460) (3) Data frame sent I0820 23:14:35.634698 6 log.go:172] (0xc0021bc630) Data frame received for 5 I0820 23:14:35.634749 6 
log.go:172] (0xc0011ca780) (5) Data frame handling I0820 23:14:35.634790 6 log.go:172] (0xc0021bc630) Data frame received for 3 I0820 23:14:35.634814 6 log.go:172] (0xc0011ca460) (3) Data frame handling I0820 23:14:35.636910 6 log.go:172] (0xc0021bc630) Data frame received for 1 I0820 23:14:35.636959 6 log.go:172] (0xc001d05860) (1) Data frame handling I0820 23:14:35.636985 6 log.go:172] (0xc001d05860) (1) Data frame sent I0820 23:14:35.637011 6 log.go:172] (0xc0021bc630) (0xc001d05860) Stream removed, broadcasting: 1 I0820 23:14:35.637043 6 log.go:172] (0xc0021bc630) Go away received I0820 23:14:35.637156 6 log.go:172] (0xc0021bc630) (0xc001d05860) Stream removed, broadcasting: 1 I0820 23:14:35.637189 6 log.go:172] (0xc0021bc630) (0xc0011ca460) Stream removed, broadcasting: 3 I0820 23:14:35.637209 6 log.go:172] (0xc0021bc630) (0xc0011ca780) Stream removed, broadcasting: 5 Aug 20 23:14:35.637: INFO: Found all expected endpoints: [netserver-0] Aug 20 23:14:35.640: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.18 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 23:14:35.640: INFO: >>> kubeConfig: /root/.kube/config I0820 23:14:35.668257 6 log.go:172] (0xc0021bcc60) (0xc0028f4000) Create stream I0820 23:14:35.668293 6 log.go:172] (0xc0021bcc60) (0xc0028f4000) Stream added, broadcasting: 1 I0820 23:14:35.670190 6 log.go:172] (0xc0021bcc60) Reply frame received for 1 I0820 23:14:35.670243 6 log.go:172] (0xc0021bcc60) (0xc001d2db80) Create stream I0820 23:14:35.670264 6 log.go:172] (0xc0021bcc60) (0xc001d2db80) Stream added, broadcasting: 3 I0820 23:14:35.671252 6 log.go:172] (0xc0021bcc60) Reply frame received for 3 I0820 23:14:35.671289 6 log.go:172] (0xc0021bcc60) (0xc0028f40a0) Create stream I0820 23:14:35.671304 6 log.go:172] (0xc0021bcc60) (0xc0028f40a0) Stream added, broadcasting: 5 I0820 23:14:35.672141 6 log.go:172] (0xc0021bcc60) Reply frame received for 5 I0820 23:14:36.758303 6 log.go:172] (0xc0021bcc60) Data frame received for 3 I0820 23:14:36.758346 6 log.go:172] (0xc001d2db80) (3) Data frame handling I0820 23:14:36.758385 6 log.go:172] (0xc001d2db80) (3) Data frame sent I0820 23:14:36.758416 6 log.go:172] (0xc0021bcc60) Data frame received for 3 I0820 23:14:36.758437 6 log.go:172] (0xc001d2db80) (3) Data frame handling I0820 23:14:36.758897 6 log.go:172] (0xc0021bcc60) Data frame received for 5 I0820 23:14:36.758928 6 log.go:172] (0xc0028f40a0) (5) Data frame handling I0820 23:14:36.760612 6 log.go:172] (0xc0021bcc60) Data frame received for 1 I0820 23:14:36.760640 6 log.go:172] (0xc0028f4000) (1) Data frame handling I0820 23:14:36.760659 6 log.go:172] (0xc0028f4000) (1) Data frame sent I0820 23:14:36.760681 6 log.go:172] (0xc0021bcc60) (0xc0028f4000) Stream removed, broadcasting: 1 I0820 23:14:36.760963 6 log.go:172] (0xc0021bcc60) (0xc0028f4000) Stream removed, broadcasting: 1 I0820 23:14:36.760995 6 log.go:172] (0xc0021bcc60) (0xc001d2db80) Stream removed, broadcasting: 3 I0820 23:14:36.761007 6 log.go:172] (0xc0021bcc60) (0xc0028f40a0) Stream removed, broadcasting: 5 Aug 20 23:14:36.761: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:14:36.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0820 
23:14:36.761349 6 log.go:172] (0xc0021bcc60) Go away received STEP: Destroying namespace "pod-network-test-3643" for this suite. • [SLOW TEST:30.646 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:14:36.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:14:47.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5652" for this suite. • [SLOW TEST:11.125 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":111,"skipped":1925,"failed":0} [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:14:47.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Aug 20 23:14:47.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1205' Aug 20 23:14:48.274: INFO: stderr: "" Aug 20 23:14:48.274: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 20 23:14:48.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1205' Aug 20 23:14:48.392: INFO: stderr: "" Aug 20 23:14:48.392: INFO: stdout: "update-demo-nautilus-28rfv update-demo-nautilus-sj87f " Aug 20 23:14:48.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28rfv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1205' Aug 20 23:14:48.521: INFO: stderr: "" Aug 20 23:14:48.521: INFO: stdout: "" Aug 20 23:14:48.521: INFO: update-demo-nautilus-28rfv is created but not running Aug 20 23:14:53.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1205' Aug 20 23:14:53.629: INFO: stderr: "" Aug 20 23:14:53.629: INFO: stdout: "update-demo-nautilus-28rfv update-demo-nautilus-sj87f " Aug 20 23:14:53.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28rfv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1205' Aug 20 23:14:53.728: INFO: stderr: "" Aug 20 23:14:53.728: INFO: stdout: "true" Aug 20 23:14:53.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28rfv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1205' Aug 20 23:14:53.820: INFO: stderr: "" Aug 20 23:14:53.821: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:14:53.821: INFO: validating pod update-demo-nautilus-28rfv Aug 20 23:14:53.825: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:14:53.825: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:14:53.825: INFO: update-demo-nautilus-28rfv is verified up and running Aug 20 23:14:53.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sj87f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1205' Aug 20 23:14:53.930: INFO: stderr: "" Aug 20 23:14:53.930: INFO: stdout: "true" Aug 20 23:14:53.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sj87f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1205' Aug 20 23:14:54.020: INFO: stderr: "" Aug 20 23:14:54.021: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:14:54.021: INFO: validating pod update-demo-nautilus-sj87f Aug 20 23:14:54.024: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:14:54.024: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:14:54.024: INFO: update-demo-nautilus-sj87f is verified up and running STEP: using delete to clean up resources Aug 20 23:14:54.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1205' Aug 20 23:14:54.125: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:14:54.125: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 20 23:14:54.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1205' Aug 20 23:14:54.246: INFO: stderr: "No resources found in kubectl-1205 namespace.\n" Aug 20 23:14:54.246: INFO: stdout: "" Aug 20 23:14:54.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1205 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 20 23:14:54.345: INFO: stderr: "" Aug 20 23:14:54.345: INFO: stdout: "update-demo-nautilus-28rfv\nupdate-demo-nautilus-sj87f\n" Aug 20 23:14:54.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1205' Aug 20 23:14:54.946: INFO: stderr: "No resources found in kubectl-1205 namespace.\n" Aug 20 23:14:54.946: INFO: stdout: "" Aug 20 23:14:54.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1205 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 20 23:14:55.037: INFO: stderr: "" Aug 20 23:14:55.037: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:14:55.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1205" for this suite. 
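The "is created but not running" and "verified up and running" decisions above come from rendering each pod's containerStatuses with a kubectl go-template and looking for the literal string "true". Reproducing one of those probes, with the pod name copied from the log (it is ephemeral, so substitute any pod carrying the name=update-demo label):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe the suite runs: emit "true" only when a container named
        // update-demo reports a running state.
        tmpl := `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "pods", "update-demo-nautilus-28rfv",
            "-n", "kubectl-1205", "-o", "template", "--template="+tmpl).CombinedOutput()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        if string(out) == "true" {
            fmt.Println("container is running")
        } else {
            fmt.Println("created but not running")
        }
    }
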
• [SLOW TEST:7.151 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":112,"skipped":1925,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:14:55.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-e08b76ab-592b-4ea1-9bb2-e5e7a21a5ec4 STEP: Creating configMap with name cm-test-opt-upd-b83fa2e4-5ca9-4e7a-808b-732bdbc3c849 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e08b76ab-592b-4ea1-9bb2-e5e7a21a5ec4 STEP: Updating configmap cm-test-opt-upd-b83fa2e4-5ca9-4e7a-808b-732bdbc3c849 STEP: Creating configMap with name cm-test-opt-create-81c9157b-3daf-481e-8553-d93160be269e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:16:05.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9307" for this suite. 
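The point of the optional-update spec above is that a volume backed by a ConfigMap marked optional: true neither blocks pod startup when the ConfigMap is absent nor goes stale afterwards: the kubelet re-syncs the volume as ConfigMaps are created, updated, or deleted. A sketch of such a pod, with hypothetical names, applied the same way the suite pipes manifests to kubectl:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    // The pod starts even though cm-test-opt-create does not exist yet; the
    // kubelet fills /etc/cm in once the ConfigMap appears. Names are hypothetical.
    const pod = `apiVersion: v1
    kind: Pod
    metadata:
      name: cm-optional-demo
    spec:
      containers:
      - name: view
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/cm/data 2>/dev/null; sleep 5; done"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: cm-test-opt-create
          optional: true
    `

    func main() {
        cmd := exec.Command("kubectl", "apply", "-f", "-")
        cmd.Stdin = strings.NewReader(pod)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }
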
• [SLOW TEST:70.948 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1998,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:16:05.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 20 23:16:06.099: INFO: Waiting up to 5m0s for pod "pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1" in namespace "emptydir-8266" to be "success or failure" Aug 20 23:16:06.187: INFO: Pod "pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 87.81269ms Aug 20 23:16:08.191: INFO: Pod "pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091914172s Aug 20 23:16:10.195: INFO: Pod "pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095964026s STEP: Saw pod success Aug 20 23:16:10.195: INFO: Pod "pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1" satisfied condition "success or failure" Aug 20 23:16:10.198: INFO: Trying to get logs from node jerma-worker2 pod pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1 container test-container: STEP: delete the pod Aug 20 23:16:10.236: INFO: Waiting for pod pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1 to disappear Aug 20 23:16:10.438: INFO: Pod pod-d47505ac-557e-44b9-8747-d1fc7f61f4f1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:16:10.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8266" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":2000,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:16:10.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:16:23.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-493" for this suite. • [SLOW TEST:13.248 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":115,"skipped":2010,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:16:23.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 20 23:16:23.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873" in namespace "projected-9508" to be "success or failure" Aug 20 23:16:23.826: INFO: Pod "downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873": Phase="Pending", Reason="", readiness=false. Elapsed: 24.615195ms Aug 20 23:16:25.830: INFO: Pod "downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02866637s Aug 20 23:16:27.834: INFO: Pod "downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032292308s STEP: Saw pod success Aug 20 23:16:27.834: INFO: Pod "downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873" satisfied condition "success or failure" Aug 20 23:16:27.837: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873 container client-container: STEP: delete the pod Aug 20 23:16:27.876: INFO: Waiting for pod downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873 to disappear Aug 20 23:16:27.885: INFO: Pod downwardapi-volume-19993b30-1cd3-4f19-b176-bf45f17fb873 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:16:27.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9508" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":2022,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:16:27.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-w52d STEP: Creating a pod to test atomic-volume-subpath Aug 20 23:16:27.979: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w52d" in namespace "subpath-3803" to be "success or failure" Aug 20 23:16:27.992: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.018634ms Aug 20 23:16:30.014: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035025654s Aug 20 23:16:32.018: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 4.039150866s Aug 20 23:16:34.022: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 6.042990029s Aug 20 23:16:36.026: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 8.047230694s Aug 20 23:16:38.030: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 10.051680753s Aug 20 23:16:40.034: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 12.055642399s Aug 20 23:16:42.038: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 14.059776988s Aug 20 23:16:44.041: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 16.062882163s Aug 20 23:16:46.046: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 18.067168141s Aug 20 23:16:48.068: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 20.089045651s Aug 20 23:16:50.079: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Running", Reason="", readiness=true. Elapsed: 22.100791929s Aug 20 23:16:52.084: INFO: Pod "pod-subpath-test-configmap-w52d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.10501843s STEP: Saw pod success Aug 20 23:16:52.084: INFO: Pod "pod-subpath-test-configmap-w52d" satisfied condition "success or failure" Aug 20 23:16:52.087: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-w52d container test-container-subpath-configmap-w52d: STEP: delete the pod Aug 20 23:16:52.106: INFO: Waiting for pod pod-subpath-test-configmap-w52d to disappear Aug 20 23:16:52.134: INFO: Pod pod-subpath-test-configmap-w52d no longer exists STEP: Deleting pod pod-subpath-test-configmap-w52d Aug 20 23:16:52.134: INFO: Deleting pod "pod-subpath-test-configmap-w52d" in namespace "subpath-3803" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:16:52.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3803" for this suite. • [SLOW TEST:24.251 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":117,"skipped":2030,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:16:52.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:16:52.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2710" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":118,"skipped":2045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:16:52.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50 [It] should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Aug 20 23:16:56.392: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Aug 20 23:17:06.489: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:17:06.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3514" for this suite. 
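The grace-period spec deletes the pod and then treats a NotFound from the API as proof that the kubelet observed the termination notice and completed it. A simplified standalone version of that check (the suite watches through a kubectl proxy instead), with hypothetical pod name and namespace:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Graceful delete, then poll until the API no longer knows the pod.
        _ = exec.Command("kubectl", "delete", "pod", "pod-submit-remove",
            "-n", "default", "--grace-period=30", "--wait=false").Run()
        for i := 0; i < 60; i++ {
            if err := exec.Command("kubectl", "get", "pod", "pod-submit-remove",
                "-n", "default").Run(); err != nil {
                // NotFound surfaces as a non-zero exit: termination completed.
                fmt.Println("pod no longer exists")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for deletion")
    }
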
• [SLOW TEST:14.256 seconds] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":119,"skipped":2069,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:17:06.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Aug 20 23:17:06.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9889' Aug 20 23:17:06.866: INFO: stderr: "" Aug 20 23:17:06.866: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 20 23:17:06.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9889' Aug 20 23:17:06.983: INFO: stderr: "" Aug 20 23:17:06.983: INFO: stdout: "update-demo-nautilus-h9dz4 update-demo-nautilus-hlrcz " Aug 20 23:17:06.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:07.074: INFO: stderr: "" Aug 20 23:17:07.074: INFO: stdout: "" Aug 20 23:17:07.074: INFO: update-demo-nautilus-h9dz4 is created but not running Aug 20 23:17:12.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9889' Aug 20 23:17:12.181: INFO: stderr: "" Aug 20 23:17:12.181: INFO: stdout: "update-demo-nautilus-h9dz4 update-demo-nautilus-hlrcz " Aug 20 23:17:12.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:12.267: INFO: stderr: "" Aug 20 23:17:12.267: INFO: stdout: "true" Aug 20 23:17:12.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:12.356: INFO: stderr: "" Aug 20 23:17:12.356: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:17:12.356: INFO: validating pod update-demo-nautilus-h9dz4 Aug 20 23:17:12.360: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:17:12.360: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:17:12.360: INFO: update-demo-nautilus-h9dz4 is verified up and running Aug 20 23:17:12.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hlrcz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:12.452: INFO: stderr: "" Aug 20 23:17:12.452: INFO: stdout: "true" Aug 20 23:17:12.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hlrcz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:12.550: INFO: stderr: "" Aug 20 23:17:12.550: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:17:12.550: INFO: validating pod update-demo-nautilus-hlrcz Aug 20 23:17:12.553: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:17:12.553: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:17:12.553: INFO: update-demo-nautilus-hlrcz is verified up and running STEP: scaling down the replication controller Aug 20 23:17:12.556: INFO: scanned /root for discovery docs: Aug 20 23:17:12.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9889' Aug 20 23:17:13.660: INFO: stderr: "" Aug 20 23:17:13.660: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 20 23:17:13.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9889' Aug 20 23:17:13.765: INFO: stderr: "" Aug 20 23:17:13.765: INFO: stdout: "update-demo-nautilus-h9dz4 update-demo-nautilus-hlrcz " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 20 23:17:18.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9889' Aug 20 23:17:18.878: INFO: stderr: "" Aug 20 23:17:18.878: INFO: stdout: "update-demo-nautilus-h9dz4 update-demo-nautilus-hlrcz " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 20 23:17:23.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9889' Aug 20 23:17:23.973: INFO: stderr: "" Aug 20 23:17:23.973: INFO: stdout: "update-demo-nautilus-h9dz4 " Aug 20 23:17:23.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:24.073: INFO: stderr: "" Aug 20 23:17:24.073: INFO: stdout: "true" Aug 20 23:17:24.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:24.167: INFO: stderr: "" Aug 20 23:17:24.167: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:17:24.167: INFO: validating pod update-demo-nautilus-h9dz4 Aug 20 23:17:24.170: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:17:24.170: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:17:24.170: INFO: update-demo-nautilus-h9dz4 is verified up and running STEP: scaling up the replication controller Aug 20 23:17:24.171: INFO: scanned /root for discovery docs: Aug 20 23:17:24.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9889' Aug 20 23:17:25.329: INFO: stderr: "" Aug 20 23:17:25.329: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 20 23:17:25.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9889' Aug 20 23:17:25.453: INFO: stderr: "" Aug 20 23:17:25.453: INFO: stdout: "update-demo-nautilus-h9dz4 update-demo-nautilus-hszgw " Aug 20 23:17:25.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:25.543: INFO: stderr: "" Aug 20 23:17:25.543: INFO: stdout: "true" Aug 20 23:17:25.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:25.694: INFO: stderr: "" Aug 20 23:17:25.694: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:17:25.694: INFO: validating pod update-demo-nautilus-h9dz4 Aug 20 23:17:25.698: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:17:25.698: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:17:25.698: INFO: update-demo-nautilus-h9dz4 is verified up and running Aug 20 23:17:25.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hszgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:25.803: INFO: stderr: "" Aug 20 23:17:25.803: INFO: stdout: "" Aug 20 23:17:25.803: INFO: update-demo-nautilus-hszgw is created but not running Aug 20 23:17:30.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9889' Aug 20 23:17:30.905: INFO: stderr: "" Aug 20 23:17:30.905: INFO: stdout: "update-demo-nautilus-h9dz4 update-demo-nautilus-hszgw " Aug 20 23:17:30.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:30.993: INFO: stderr: "" Aug 20 23:17:30.993: INFO: stdout: "true" Aug 20 23:17:30.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9dz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:31.093: INFO: stderr: "" Aug 20 23:17:31.093: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:17:31.093: INFO: validating pod update-demo-nautilus-h9dz4 Aug 20 23:17:31.096: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:17:31.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:17:31.096: INFO: update-demo-nautilus-h9dz4 is verified up and running Aug 20 23:17:31.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hszgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:31.194: INFO: stderr: "" Aug 20 23:17:31.194: INFO: stdout: "true" Aug 20 23:17:31.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hszgw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9889' Aug 20 23:17:31.288: INFO: stderr: "" Aug 20 23:17:31.289: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 23:17:31.289: INFO: validating pod update-demo-nautilus-hszgw Aug 20 23:17:31.292: INFO: got data: { "image": "nautilus.jpg" } Aug 20 23:17:31.292: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 23:17:31.292: INFO: update-demo-nautilus-hszgw is verified up and running STEP: using delete to clean up resources Aug 20 23:17:31.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9889' Aug 20 23:17:31.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 23:17:31.400: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 20 23:17:31.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9889' Aug 20 23:17:31.497: INFO: stderr: "No resources found in kubectl-9889 namespace.\n" Aug 20 23:17:31.497: INFO: stdout: "" Aug 20 23:17:31.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9889 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 20 23:17:31.595: INFO: stderr: "" Aug 20 23:17:31.595: INFO: stdout: "update-demo-nautilus-h9dz4\nupdate-demo-nautilus-hszgw\n" Aug 20 23:17:32.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9889' Aug 20 23:17:32.207: INFO: stderr: "No resources found in kubectl-9889 namespace.\n" Aug 20 23:17:32.207: INFO: stdout: "" Aug 20 23:17:32.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9889 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 20 23:17:32.404: INFO: stderr: "" Aug 20 23:17:32.404: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:17:32.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9889" for this suite. 
• [SLOW TEST:25.904 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":120,"skipped":2076,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:17:32.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:17:48.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2177" for this suite. • [SLOW TEST:16.543 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":121,"skipped":2081,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:17:48.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-4082/configmap-test-1fe748d4-0852-4d08-8106-aee02b293be7 STEP: Creating a pod to test consume configMaps Aug 20 23:17:49.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17" in namespace "configmap-4082" to be "success or failure" Aug 20 23:17:49.098: INFO: Pod "pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.648291ms Aug 20 23:17:51.102: INFO: Pod "pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01253186s Aug 20 23:17:53.106: INFO: Pod "pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016834375s STEP: Saw pod success Aug 20 23:17:53.106: INFO: Pod "pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17" satisfied condition "success or failure" Aug 20 23:17:53.109: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17 container env-test: STEP: delete the pod Aug 20 23:17:53.145: INFO: Waiting for pod pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17 to disappear Aug 20 23:17:53.188: INFO: Pod pod-configmaps-a743d558-b0b0-4aa1-b2ab-c888025a7d17 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:17:53.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4082" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2095,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:17:53.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:17:53.258: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-11471867-a9fd-48e5-8c5a-d1f074286413" in namespace "security-context-test-4473" to be "success or failure" Aug 20 23:17:53.262: INFO: Pod "busybox-readonly-false-11471867-a9fd-48e5-8c5a-d1f074286413": Phase="Pending", Reason="", readiness=false. Elapsed: 3.985209ms Aug 20 23:17:55.266: INFO: Pod "busybox-readonly-false-11471867-a9fd-48e5-8c5a-d1f074286413": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007933241s Aug 20 23:17:57.270: INFO: Pod "busybox-readonly-false-11471867-a9fd-48e5-8c5a-d1f074286413": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011986885s Aug 20 23:17:57.270: INFO: Pod "busybox-readonly-false-11471867-a9fd-48e5-8c5a-d1f074286413" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:17:57.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4473" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:17:57.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 20 23:18:01.347: INFO: &Pod{ObjectMeta:{send-events-797b207f-503c-4840-986b-0747a6a5d554 events-8699 /api/v1/namespaces/events-8699/pods/send-events-797b207f-503c-4840-986b-0747a6a5d554 a84da56e-6ae3-4543-957f-dec3bfef34db 1951453 0 2020-08-20 23:17:57 +0000 UTC map[name:foo time:326322253] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mrmm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mrmm6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mrmm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:17:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:18:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:17:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.25,StartTime:2020-08-20 23:17:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:17:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://81b782f4fc2cd23ee3606bbf5675c83fca70e3fbe9ed258355018e27d4ddf11a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 20 23:18:03.352: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 20 23:18:05.356: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:18:05.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8699" for this suite. 
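The Events test above creates a pod, then looks for a scheduler event and a kubelet event referencing it. The same events are visible by hand through field selectors on the core Event API (pod name illustrative; image and args mirror the run above):

kubectl run send-events-demo --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --restart=Never -- serve-hostname
# Event emitted by the scheduler for this pod:
kubectl get events --field-selector involvedObject.name=send-events-demo,source=default-scheduler
# Events emitted by the kubelet on the node running the pod:
kubectl get events --field-selector involvedObject.name=send-events-demo,reason=Started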
• [SLOW TEST:8.111 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":124,"skipped":2147,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:18:05.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 23:18:06.254: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 23:18:08.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562286, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562286, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562286, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562286, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 23:18:11.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create 
operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:18:11.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5159" for this suite. STEP: Destroying namespace "webhook-5159-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.123 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":125,"skipped":2152,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:18:11.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5069 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 20 23:18:11.605: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 20 23:18:37.750: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.23:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5069 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 23:18:37.750: INFO: >>> kubeConfig: /root/.kube/config I0820 23:18:37.783076 6 log.go:172] (0xc0016b4580) (0xc001d2d860) Create stream I0820 23:18:37.783115 6 log.go:172] (0xc0016b4580) (0xc001d2d860) Stream added, broadcasting: 1 I0820 23:18:37.785263 6 log.go:172] (0xc0016b4580) Reply frame received for 1 I0820 23:18:37.785305 6 log.go:172] 
(0xc0016b4580) (0xc002ae8be0) Create stream I0820 23:18:37.785319 6 log.go:172] (0xc0016b4580) (0xc002ae8be0) Stream added, broadcasting: 3 I0820 23:18:37.786227 6 log.go:172] (0xc0016b4580) Reply frame received for 3 I0820 23:18:37.786254 6 log.go:172] (0xc0016b4580) (0xc002ae8c80) Create stream I0820 23:18:37.786262 6 log.go:172] (0xc0016b4580) (0xc002ae8c80) Stream added, broadcasting: 5 I0820 23:18:37.787114 6 log.go:172] (0xc0016b4580) Reply frame received for 5 I0820 23:18:37.838632 6 log.go:172] (0xc0016b4580) Data frame received for 5 I0820 23:18:37.838672 6 log.go:172] (0xc002ae8c80) (5) Data frame handling I0820 23:18:37.838703 6 log.go:172] (0xc0016b4580) Data frame received for 3 I0820 23:18:37.838721 6 log.go:172] (0xc002ae8be0) (3) Data frame handling I0820 23:18:37.838735 6 log.go:172] (0xc002ae8be0) (3) Data frame sent I0820 23:18:37.838746 6 log.go:172] (0xc0016b4580) Data frame received for 3 I0820 23:18:37.838756 6 log.go:172] (0xc002ae8be0) (3) Data frame handling I0820 23:18:37.840226 6 log.go:172] (0xc0016b4580) Data frame received for 1 I0820 23:18:37.840249 6 log.go:172] (0xc001d2d860) (1) Data frame handling I0820 23:18:37.840266 6 log.go:172] (0xc001d2d860) (1) Data frame sent I0820 23:18:37.840282 6 log.go:172] (0xc0016b4580) (0xc001d2d860) Stream removed, broadcasting: 1 I0820 23:18:37.840306 6 log.go:172] (0xc0016b4580) Go away received I0820 23:18:37.840423 6 log.go:172] (0xc0016b4580) (0xc001d2d860) Stream removed, broadcasting: 1 I0820 23:18:37.840445 6 log.go:172] (0xc0016b4580) (0xc002ae8be0) Stream removed, broadcasting: 3 I0820 23:18:37.840452 6 log.go:172] (0xc0016b4580) (0xc002ae8c80) Stream removed, broadcasting: 5 Aug 20 23:18:37.840: INFO: Found all expected endpoints: [netserver-0] Aug 20 23:18:37.843: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.26:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5069 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 23:18:37.843: INFO: >>> kubeConfig: /root/.kube/config I0820 23:18:37.873106 6 log.go:172] (0xc001102a50) (0xc002dbcbe0) Create stream I0820 23:18:37.873125 6 log.go:172] (0xc001102a50) (0xc002dbcbe0) Stream added, broadcasting: 1 I0820 23:18:37.875042 6 log.go:172] (0xc001102a50) Reply frame received for 1 I0820 23:18:37.875097 6 log.go:172] (0xc001102a50) (0xc002dbce60) Create stream I0820 23:18:37.875106 6 log.go:172] (0xc001102a50) (0xc002dbce60) Stream added, broadcasting: 3 I0820 23:18:37.876051 6 log.go:172] (0xc001102a50) Reply frame received for 3 I0820 23:18:37.876075 6 log.go:172] (0xc001102a50) (0xc002ae8d20) Create stream I0820 23:18:37.876082 6 log.go:172] (0xc001102a50) (0xc002ae8d20) Stream added, broadcasting: 5 I0820 23:18:37.876960 6 log.go:172] (0xc001102a50) Reply frame received for 5 I0820 23:18:37.957194 6 log.go:172] (0xc001102a50) Data frame received for 3 I0820 23:18:37.957241 6 log.go:172] (0xc002dbce60) (3) Data frame handling I0820 23:18:37.957269 6 log.go:172] (0xc002dbce60) (3) Data frame sent I0820 23:18:37.957294 6 log.go:172] (0xc001102a50) Data frame received for 3 I0820 23:18:37.957314 6 log.go:172] (0xc002dbce60) (3) Data frame handling I0820 23:18:37.957351 6 log.go:172] (0xc001102a50) Data frame received for 5 I0820 23:18:37.957374 6 log.go:172] (0xc002ae8d20) (5) Data frame handling I0820 23:18:37.959270 6 log.go:172] (0xc001102a50) Data frame received for 1 I0820 23:18:37.959300 6 log.go:172] (0xc002dbcbe0) 
(1) Data frame handling I0820 23:18:37.959328 6 log.go:172] (0xc002dbcbe0) (1) Data frame sent I0820 23:18:37.959354 6 log.go:172] (0xc001102a50) (0xc002dbcbe0) Stream removed, broadcasting: 1 I0820 23:18:37.959408 6 log.go:172] (0xc001102a50) Go away received I0820 23:18:37.959519 6 log.go:172] (0xc001102a50) (0xc002dbcbe0) Stream removed, broadcasting: 1 I0820 23:18:37.959558 6 log.go:172] (0xc001102a50) (0xc002dbce60) Stream removed, broadcasting: 3 I0820 23:18:37.959584 6 log.go:172] (0xc001102a50) (0xc002ae8d20) Stream removed, broadcasting: 5 Aug 20 23:18:37.959: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:18:37.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5069" for this suite. • [SLOW TEST:26.452 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2165,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:18:37.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 20 23:18:38.101: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"48f74111-b688-422d-b5f3-3d1d12922a66", Controller:(*bool)(0xc0060783f2), BlockOwnerDeletion:(*bool)(0xc0060783f3)}} Aug 20 23:18:38.108: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f8fcb0ef-c712-4d62-bc39-594ef5d84e01", Controller:(*bool)(0xc0033dbbea), BlockOwnerDeletion:(*bool)(0xc0033dbbeb)}} Aug 20 23:18:38.140: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"589903ac-f6e4-4590-befc-878e84ef7807", Controller:(*bool)(0xc00607859a), BlockOwnerDeletion:(*bool)(0xc00607859b)}} [AfterEach] [sig-api-machinery] Garbage 
collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:18:43.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4915" for this suite. • [SLOW TEST:5.223 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":127,"skipped":2172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:18:43.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Aug 20 23:18:43.328: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix568050068/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:18:43.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4687" for this suite. 
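The proxy test above starts kubectl proxy on a Unix socket and fetches /api/ through it. The equivalent by hand (socket path illustrative; requires curl built with Unix-socket support, 7.40+):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1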
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":128,"skipped":2224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:18:43.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 20 23:18:44.291: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951781 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 23:18:44.291: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951781 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 20 23:18:54.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951836 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 20 23:18:54.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951836 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 20 23:19:04.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951866 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 20 23:19:04.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 
/api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951866 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 20 23:19:14.326: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951896 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 20 23:19:14.326: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-a ee6915fd-da93-4457-b448-ad4164f422cf 1951896 0 2020-08-20 23:18:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 20 23:19:24.334: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-b 37400c77-60a1-4386-9717-b0b492f09ff6 1951926 0 2020-08-20 23:19:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 23:19:24.334: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-b 37400c77-60a1-4386-9717-b0b492f09ff6 1951926 0 2020-08-20 23:19:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 20 23:19:34.345: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-b 37400c77-60a1-4386-9717-b0b492f09ff6 1951956 0 2020-08-20 23:19:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 23:19:34.345: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8003 /api/v1/namespaces/watch-8003/configmaps/e2e-watch-test-configmap-b 37400c77-60a1-4386-9717-b0b492f09ff6 1951956 0 2020-08-20 23:19:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:19:44.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8003" for this suite. 
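The Watchers test above registers label-selected watches and checks that each sees exactly the ADDED/MODIFIED/DELETED notifications for its labels. A sketch of observing the same notifications interactively (object name illustrative; the label is taken from the run above; a label-selected watch reports ADDED when the object starts matching the selector):

# Terminal 1: watch configmaps carrying label A
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch
# Terminal 2: generate ADDED, MODIFIED, MODIFIED, DELETED in order
kubectl create configmap e2e-watch-test-configmap-a
kubectl label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"1"}}'
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-test-configmap-a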
• [SLOW TEST:60.802 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":129,"skipped":2247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:19:44.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 20 23:19:45.451: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 20 23:19:47.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562385, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562385, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562385, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562385, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 23:19:51.128: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:19:55.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3990" for this suite. STEP: Destroying namespace "webhook-3990-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.670 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":130,"skipped":2282,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 20 23:19:56.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0820 23:20:36.998604 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 20 23:20:36.998: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:20:36.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9470" for this suite.

• [SLOW TEST:40.980 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":131,"skipped":2292,"failed":0}
SSSSSSS
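The delete-options behavior verified above (pods outliving their ReplicationController) amounts to a non-cascading delete; a minimal sketch, assuming an existing rc named simpletest-rc:

  # kubectl <= 1.19 syntax; newer clients spell this --cascade=orphan
  kubectl delete rc simpletest-rc --cascade=false
  kubectl get pods   # the rc's pods survive as orphans

The API-level equivalent is a DELETE with deleteOptions propagationPolicy set to Orphan.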
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:20:37.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-05911f85-d8cf-4ebb-b095-fca8873e64d5
STEP: Creating a pod to test consume configMaps
Aug 20 23:20:37.096: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104" in namespace "configmap-9563" to be "success or failure"
Aug 20 23:20:37.106: INFO: Pod "pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104": Phase="Pending", Reason="", readiness=false. Elapsed: 9.96683ms
Aug 20 23:20:39.124: INFO: Pod "pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027917604s
Aug 20 23:20:41.130: INFO: Pod "pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034019366s
STEP: Saw pod success
Aug 20 23:20:41.130: INFO: Pod "pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104" satisfied condition "success or failure"
Aug 20 23:20:41.133: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104 container configmap-volume-test: 
STEP: delete the pod
Aug 20 23:20:41.154: INFO: Waiting for pod pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104 to disappear
Aug 20 23:20:41.180: INFO: Pod pod-configmaps-b6a36a2d-ea06-40b9-9112-32185dc44104 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:20:41.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9563" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2299,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
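The pod under test mounts a single ConfigMap at two paths; a minimal sketch of such a manifest (names illustrative):

  kubectl create configmap example-cm --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-two-volumes
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
      volumeMounts:
      - {name: cm-one, mountPath: /etc/cm-one}
      - {name: cm-two, mountPath: /etc/cm-two}
    volumes:
    - name: cm-one
      configMap: {name: example-cm}
    - name: cm-two
      configMap: {name: example-cm}
  EOF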
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:20:41.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 20 23:20:41.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba" in namespace "projected-9058" to be "success or failure"
Aug 20 23:20:41.309: INFO: Pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba": Phase="Pending", Reason="", readiness=false. Elapsed: 9.853124ms
Aug 20 23:20:43.529: INFO: Pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229303632s
Aug 20 23:20:45.670: INFO: Pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370505206s
Aug 20 23:20:47.783: INFO: Pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba": Phase="Running", Reason="", readiness=true. Elapsed: 6.483847758s
Aug 20 23:20:49.851: INFO: Pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba": Phase="Running", Reason="", readiness=true. Elapsed: 8.551961704s
Aug 20 23:20:51.855: INFO: Pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.555950732s
STEP: Saw pod success
Aug 20 23:20:51.855: INFO: Pod "downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba" satisfied condition "success or failure"
Aug 20 23:20:51.859: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba container client-container: 
STEP: delete the pod
Aug 20 23:20:51.879: INFO: Waiting for pod downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba to disappear
Aug 20 23:20:51.881: INFO: Pod downwardapi-volume-3ecf1073-1b28-4e14-aed1-5704899937ba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:20:51.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9058" for this suite.

• [SLOW TEST:10.721 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2320,"failed":0}
SSSSSSSSSSSS
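The projected downward API volume read here exposes the container's own memory limit as a file; a minimal sketch (names and values illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mem-limit
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
      resources:
        limits: {memory: 64Mi}
      volumeMounts:
      - {name: podinfo, mountPath: /etc/podinfo}
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF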
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:20:51.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 20 23:20:51.976: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:21:02.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5555" for this suite.

• [SLOW TEST:10.183 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":134,"skipped":2332,"failed":0}
SSSS
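On a restartPolicy: Never pod, init containers still run to completion, in order, before the app container starts; a minimal sketch (names illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-on-restartnever
  spec:
    restartPolicy: Never
    initContainers:
    - {name: init-1, image: busybox, command: ["true"]}
    - {name: init-2, image: busybox, command: ["true"]}
    containers:
    - {name: run-1, image: busybox, command: ["true"]}
  EOF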
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:21:02.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:21:02.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 20 23:21:04.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5469 create -f -'
Aug 20 23:21:07.255: INFO: stderr: ""
Aug 20 23:21:07.255: INFO: stdout: "e2e-test-crd-publish-openapi-7046-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 20 23:21:07.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5469 delete e2e-test-crd-publish-openapi-7046-crds test-cr'
Aug 20 23:21:07.378: INFO: stderr: ""
Aug 20 23:21:07.378: INFO: stdout: "e2e-test-crd-publish-openapi-7046-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 20 23:21:07.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5469 apply -f -'
Aug 20 23:21:07.721: INFO: stderr: ""
Aug 20 23:21:07.721: INFO: stdout: "e2e-test-crd-publish-openapi-7046-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 20 23:21:07.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5469 delete e2e-test-crd-publish-openapi-7046-crds test-cr'
Aug 20 23:21:07.833: INFO: stderr: ""
Aug 20 23:21:07.833: INFO: stdout: "e2e-test-crd-publish-openapi-7046-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 20 23:21:07.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7046-crds'
Aug 20 23:21:08.043: INFO: stderr: ""
Aug 20 23:21:08.043: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7046-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:21:10.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5469" for this suite.

• [SLOW TEST:8.857 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":135,"skipped":2336,"failed":0}
SSSSSSS
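The pruning opt-out validated above comes from x-kubernetes-preserve-unknown-fields in the CRD schema, which is what lets kubectl's client-side validation accept arbitrary properties; a minimal sketch of such a CRD (group and names illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: waldos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names: {plural: waldos, singular: waldo, kind: Waldo}
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
  EOF
  kubectl explain waldos   # served from the published OpenAPI, as in the log above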
"e2e-test-crd-publish-openapi-6853-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 20 23:21:19.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8192 delete e2e-test-crd-publish-openapi-6853-crds test-cr' Aug 20 23:21:20.313: INFO: stderr: "" Aug 20 23:21:20.313: INFO: stdout: "e2e-test-crd-publish-openapi-6853-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 20 23:21:20.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8192 apply -f -' Aug 20 23:21:20.596: INFO: stderr: "" Aug 20 23:21:20.596: INFO: stdout: "e2e-test-crd-publish-openapi-6853-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 20 23:21:20.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8192 delete e2e-test-crd-publish-openapi-6853-crds test-cr' Aug 20 23:21:20.692: INFO: stderr: "" Aug 20 23:21:20.692: INFO: stdout: "e2e-test-crd-publish-openapi-6853-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 20 23:21:20.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6853-crds' Aug 20 23:21:20.949: INFO: stderr: "" Aug 20 23:21:20.949: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6853-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:21:22.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8192" for this suite. 

• [SLOW TEST:11.880 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":136,"skipped":2343,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:21:22.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:21:23.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 23:21:25.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562483, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562483, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562483, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562483, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562483, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562483, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 20 23:21:31.026: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 20 23:21:41.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9044" for this suite. STEP: Destroying namespace "webhook-9044-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.026 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":137,"skipped":2348,"failed":0}
S
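The admission policy exercised above is registered declaratively; a minimal sketch of a validating webhook covering pods and configmaps (service coordinates illustrative; the suite generated its own):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: deny-example
  webhooks:
  - name: deny.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods", "configmaps"]
    clientConfig:
      service: {namespace: default, name: sample-webhook, path: /validate}
  EOF

The whitelisted-namespace step works because a namespaceSelector on the webhook can exempt labeled namespaces from the rules.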
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:21:42.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-538
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-538
Aug 20 23:21:43.708: INFO: Found 0 stateful pods, waiting for 1
Aug 20 23:21:53.712: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 20 23:21:53.742: INFO: Deleting all statefulset in ns statefulset-538
Aug 20 23:21:53.761: INFO: Scaling statefulset ss to 0
Aug 20 23:22:13.838: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 23:22:13.842: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:22:13.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-538" for this suite.

• [SLOW TEST:31.006 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":138,"skipped":2349,"failed":0}
SSSSSSSSSSSSSSS
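The scale subresource being read and updated above is directly addressable; with the namespace and name from this run:

  kubectl scale statefulset ss --replicas=2 -n statefulset-538
  kubectl get --raw /apis/apps/v1/namespaces/statefulset-538/statefulsets/ss/scale   # returns an autoscaling/v1 Scale object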
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:22:13.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:22:13.995: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log
containers/
[... the same two-entry listing is returned for proxied requests (1) through (19) ...]
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":139,"skipped":2364,"failed":0}
SSSS
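The node-log listing fetched twenty times above is an ordinary raw GET through the apiserver proxy; with the node and kubelet port from this run:

  kubectl get --raw "/api/v1/nodes/jerma-worker2:10250/proxy/logs/"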
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:22:30.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7739" for this suite.

• [SLOW TEST:16.154 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":140,"skipped":2368,"failed":0}
SSSSSSSSSSSSSSSSSSS
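"Locally restarted" here means the kubelet restarts the failing container in place (restartPolicy: OnFailure) rather than the Job controller replacing the pod. A minimal sketch of such a Job; the fail-once trick with an emptyDir marker file mirrors what the fixture's fail-once-local pods do, but the command is illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: fail-once-local
  spec:
    completions: 4
    parallelism: 2
    template:
      spec:
        restartPolicy: OnFailure
        containers:
        - name: c
          image: busybox
          # first attempt leaves a marker and fails; the in-place restart then succeeds
          command: ["sh", "-c", "if [ -e /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
          volumeMounts:
          - {name: data, mountPath: /data}
        volumes:
        - {name: data, emptyDir: {}}
  EOF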
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:22:30.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 20 23:22:31.069: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 20 23:22:31.328: INFO: Waiting for terminating namespaces to be deleted...
Aug 20 23:22:31.431: INFO: 
Logging pods the kubelet thinks is on node jerma-worker before test
Aug 20 23:22:31.469: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.469: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 20 23:22:31.469: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.469: INFO: 	Container app ready: true, restart count 0
Aug 20 23:22:31.469: INFO: fail-once-local-kv6kk from job-7739 started at 2020-08-20 23:22:14 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.469: INFO: 	Container c ready: false, restart count 1
Aug 20 23:22:31.469: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.469: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 20 23:22:31.469: INFO: fail-once-local-p7fj2 from job-7739 started at 2020-08-20 23:22:21 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.469: INFO: 	Container c ready: false, restart count 1
Aug 20 23:22:31.469: INFO: 
Logging pods the kubelet thinks is on node jerma-worker2 before test
Aug 20 23:22:31.477: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.477: INFO: 	Container app ready: true, restart count 0
Aug 20 23:22:31.477: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.477: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 20 23:22:31.477: INFO: fail-once-local-5mtz9 from job-7739 started at 2020-08-20 23:22:21 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.477: INFO: 	Container c ready: false, restart count 1
Aug 20 23:22:31.477: INFO: fail-once-local-fzbqw from job-7739 started at 2020-08-20 23:22:14 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.477: INFO: 	Container c ready: false, restart count 1
Aug 20 23:22:31.477: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 20 23:22:31.477: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-da8f069f-9674-4d0e-9c05-0570666046ac 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-da8f069f-9674-4d0e-9c05-0570666046ac off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-da8f069f-9674-4d0e-9c05-0570666046ac
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:22:39.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4083" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:9.715 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":141,"skipped":2387,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
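The label-and-relaunch sequence above translates directly to kubectl; using the node and label from this run (pod name illustrative):

  kubectl label node jerma-worker kubernetes.io/e2e-da8f069f-9674-4d0e-9c05-0570666046ac=42
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: with-labels
  spec:
    nodeSelector:
      kubernetes.io/e2e-da8f069f-9674-4d0e-9c05-0570666046ac: "42"
    containers:
    - {name: pause, image: k8s.gcr.io/pause:3.1}
  EOF
  kubectl label node jerma-worker kubernetes.io/e2e-da8f069f-9674-4d0e-9c05-0570666046ac-   # trailing dash removes the label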
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:22:39.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 20 23:22:40.061: INFO: >>> kubeConfig: /root/.kube/config
Aug 20 23:22:43.000: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:22:54.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2156" for this suite.

• [SLOW TEST:14.749 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":142,"skipped":2409,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:22:54.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 20 23:22:54.801: INFO: Waiting up to 5m0s for pod "pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4" in namespace "emptydir-2782" to be "success or failure"
Aug 20 23:22:54.843: INFO: Pod "pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 41.69216ms
Aug 20 23:22:56.922: INFO: Pod "pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120601079s
Aug 20 23:22:58.926: INFO: Pod "pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124604893s
STEP: Saw pod success
Aug 20 23:22:58.926: INFO: Pod "pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4" satisfied condition "success or failure"
Aug 20 23:22:58.929: INFO: Trying to get logs from node jerma-worker2 pod pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4 container test-container: 
STEP: delete the pod
Aug 20 23:22:59.032: INFO: Waiting for pod pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4 to disappear
Aug 20 23:22:59.100: INFO: Pod pod-f123cfa3-1292-4d3b-93c4-6fcddd1dd3e4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:22:59.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2782" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
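The matrix cell exercised here is an emptyDir volume on the default medium, written by a non-root user, with files created mode 0666; a minimal sketch (UID illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666
  spec:
    restartPolicy: Never
    securityContext: {runAsUser: 1001}
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "umask 0; touch /test-volume/f; ls -l /test-volume/f"]
      volumeMounts:
      - {name: test-volume, mountPath: /test-volume}
    volumes:
    - {name: test-volume, emptyDir: {}}
  EOF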
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:22:59.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 20 23:23:00.335: INFO: created pod pod-service-account-defaultsa
Aug 20 23:23:00.335: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 20 23:23:00.343: INFO: created pod pod-service-account-mountsa
Aug 20 23:23:00.343: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 20 23:23:00.401: INFO: created pod pod-service-account-nomountsa
Aug 20 23:23:00.401: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 20 23:23:00.434: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 20 23:23:00.434: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 20 23:23:00.445: INFO: created pod pod-service-account-mountsa-mountspec
Aug 20 23:23:00.445: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 20 23:23:00.516: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 20 23:23:00.516: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 20 23:23:00.552: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 20 23:23:00.552: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 20 23:23:00.583: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 20 23:23:00.583: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 20 23:23:00.970: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 20 23:23:00.970: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:00.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7772" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":144,"skipped":2437,"failed":0}
SSS
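Opting out of token automount can be declared on the ServiceAccount, on the pod spec, or both; when both are set the pod-level field wins, which is what the mountsa/nomountsa permutations above verify. A minimal sketch:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomount-sa
  automountServiceAccountToken: false
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-nomount
  spec:
    serviceAccountName: nomount-sa
    automountServiceAccountToken: false
    containers:
    - {name: pause, image: k8s.gcr.io/pause:3.1}
  EOF
  kubectl get pod pod-nomount -o jsonpath='{.spec.containers[0].volumeMounts}'   # no token volume mount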
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:01.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:23:03.623: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 23:23:05.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562584, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:23:08.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562584, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:23:10.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562584, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:23:12.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562584, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:23:13.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562584, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:23:16.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562584, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562583, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:23:19.130: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:23:19.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:20.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1465" for this suite.
STEP: Destroying namespace "webhook-1465-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.368 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":145,"skipped":2440,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:20.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-87405a4b-37e9-4e2a-9af3-0816b02596a9
STEP: Creating a pod to test consume secrets
Aug 20 23:23:20.540: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0" in namespace "projected-5043" to be "success or failure"
Aug 20 23:23:20.562: INFO: Pod "pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.513591ms
Aug 20 23:23:22.598: INFO: Pod "pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057929175s
Aug 20 23:23:24.602: INFO: Pod "pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061612865s
STEP: Saw pod success
Aug 20 23:23:24.602: INFO: Pod "pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0" satisfied condition "success or failure"
Aug 20 23:23:24.604: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0 container secret-volume-test: 
STEP: delete the pod
Aug 20 23:23:24.644: INFO: Waiting for pod pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0 to disappear
Aug 20 23:23:24.656: INFO: Pod pod-projected-secrets-7457f8b1-1cb7-4ebe-a7e8-d24adf6026b0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:24.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5043" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2453,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:24.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:36.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9077" for this suite.

• [SLOW TEST:11.364 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":147,"skipped":2458,"failed":0}
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:36.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:23:36.077: INFO: Creating ReplicaSet my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6
Aug 20 23:23:36.131: INFO: Pod name my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6: Found 0 pods out of 1
Aug 20 23:23:41.134: INFO: Pod name my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6: Found 1 pods out of 1
Aug 20 23:23:41.134: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6" is running
Aug 20 23:23:41.136: INFO: Pod "my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6-88cqn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 23:23:36 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 23:23:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 23:23:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 23:23:36 +0000 UTC Reason: Message:}])
Aug 20 23:23:41.136: INFO: Trying to dial the pod
Aug 20 23:23:46.146: INFO: Controller my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6: Got expected result from replica 1 [my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6-88cqn]: "my-hostname-basic-7955db28-b84c-493c-b315-e738d0e2b8c6-88cqn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:46.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9799" for this suite.

• [SLOW TEST:10.127 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":148,"skipped":2460,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:46.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-9cc7c000-c291-4bf8-a31c-844e4b4895c5
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:46.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3343" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":149,"skipped":2461,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:46.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:23:46.356: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8d13df6a-c20a-436b-9325-99e6c9230838" in namespace "security-context-test-4074" to be "success or failure"
Aug 20 23:23:46.420: INFO: Pod "busybox-user-65534-8d13df6a-c20a-436b-9325-99e6c9230838": Phase="Pending", Reason="", readiness=false. Elapsed: 63.625554ms
Aug 20 23:23:48.423: INFO: Pod "busybox-user-65534-8d13df6a-c20a-436b-9325-99e6c9230838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066913256s
Aug 20 23:23:50.427: INFO: Pod "busybox-user-65534-8d13df6a-c20a-436b-9325-99e6c9230838": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071068621s
Aug 20 23:23:52.443: INFO: Pod "busybox-user-65534-8d13df6a-c20a-436b-9325-99e6c9230838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086740007s
Aug 20 23:23:52.443: INFO: Pod "busybox-user-65534-8d13df6a-c20a-436b-9325-99e6c9230838" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:52.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4074" for this suite.

• [SLOW TEST:6.188 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:52.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d4974686-f482-44f5-9eec-700245eabac5
STEP: Creating a pod to test consume secrets
Aug 20 23:23:52.588: INFO: Waiting up to 5m0s for pod "pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b" in namespace "secrets-5862" to be "success or failure"
Aug 20 23:23:52.592: INFO: Pod "pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375074ms
Aug 20 23:23:54.596: INFO: Pod "pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007935517s
Aug 20 23:23:56.684: INFO: Pod "pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b": Phase="Running", Reason="", readiness=true. Elapsed: 4.096158393s
Aug 20 23:23:58.700: INFO: Pod "pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112100009s
STEP: Saw pod success
Aug 20 23:23:58.700: INFO: Pod "pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b" satisfied condition "success or failure"
Aug 20 23:23:58.702: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b container secret-volume-test: 
STEP: delete the pod
Aug 20 23:23:58.755: INFO: Waiting for pod pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b to disappear
Aug 20 23:23:58.874: INFO: Pod pod-secrets-da730f95-1724-4df4-8b0d-a1cfc3ca708b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:23:58.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5862" for this suite.

• [SLOW TEST:6.501 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2500,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:23:58.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:23:59.217: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:04.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8326" for this suite.

• [SLOW TEST:5.727 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":152,"skipped":2503,"failed":0}
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:04.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 20 23:24:04.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-651'
Aug 20 23:24:05.071: INFO: stderr: ""
Aug 20 23:24:05.071: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 20 23:24:06.076: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 20 23:24:06.076: INFO: Found 0 / 1
Aug 20 23:24:07.162: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 20 23:24:07.162: INFO: Found 0 / 1
Aug 20 23:24:08.075: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 20 23:24:08.075: INFO: Found 0 / 1
Aug 20 23:24:09.084: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 20 23:24:09.084: INFO: Found 1 / 1
Aug 20 23:24:09.084: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 20 23:24:09.087: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 20 23:24:09.087: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 20 23:24:09.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-49ghr --namespace=kubectl-651 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 20 23:24:09.192: INFO: stderr: ""
Aug 20 23:24:09.192: INFO: stdout: "pod/agnhost-master-49ghr patched\n"
STEP: checking annotations
Aug 20 23:24:09.206: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 20 23:24:09.206: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:09.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-651" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":153,"skipped":2503,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:09.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:24:09.317: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 20 23:24:09.353: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 20 23:24:14.485: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 20 23:24:14.486: INFO: Creating deployment "test-rolling-update-deployment"
Aug 20 23:24:14.569: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 20 23:24:14.583: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 20 23:24:16.601: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 20 23:24:16.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562655, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562655, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562655, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562654, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:24:18.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562655, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562655, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562655, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733562654, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:24:20.606: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 20 23:24:20.614: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-2653 /apis/apps/v1/namespaces/deployment-2653/deployments/test-rolling-update-deployment 2ca4a3c7-04eb-420c-bb74-f8d5df0abf26 1953937 1 2020-08-20 23:24:14 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00464ad68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-20 23:24:15 +0000 UTC,LastTransitionTime:2020-08-20 23:24:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-20 23:24:19 +0000 UTC,LastTransitionTime:2020-08-20 23:24:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 20 23:24:20.616: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-2653 /apis/apps/v1/namespaces/deployment-2653/replicasets/test-rolling-update-deployment-67cf4f6444 373548da-c4da-4792-a9c1-a96cf99ed77a 1953926 1 2020-08-20 23:24:14 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2ca4a3c7-04eb-420c-bb74-f8d5df0abf26 0xc00464b3b7 0xc00464b3b8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00464b438  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:24:20.616: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 20 23:24:20.616: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-2653 /apis/apps/v1/namespaces/deployment-2653/replicasets/test-rolling-update-controller 22cb534d-2784-4d67-884c-26205f6e7651 1953935 2 2020-08-20 23:24:09 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2ca4a3c7-04eb-420c-bb74-f8d5df0abf26 0xc00464b297 0xc00464b298}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00464b318  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:24:20.619: INFO: Pod "test-rolling-update-deployment-67cf4f6444-xh624" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-xh624 test-rolling-update-deployment-67cf4f6444- deployment-2653 /api/v1/namespaces/deployment-2653/pods/test-rolling-update-deployment-67cf4f6444-xh624 0da116f4-0732-4334-ad33-127e68385951 1953925 0 2020-08-20 23:24:14 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 373548da-c4da-4792-a9c1-a96cf99ed77a 0xc004673837 0xc004673838}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7rgq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7rgq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7rgq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:24:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:24:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:24:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.49,StartTime:2020-08-20 23:24:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:24:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0bcf94944a641bd168a08582e8cc3149617e579e871dc8a7c1fd40b079cd8681,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:20.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2653" for this suite.

• [SLOW TEST:11.410 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":154,"skipped":2556,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:20.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:24:20.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 20 23:24:23.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8748 create -f -'
Aug 20 23:24:27.746: INFO: stderr: ""
Aug 20 23:24:27.746: INFO: stdout: "e2e-test-crd-publish-openapi-3109-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 20 23:24:27.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8748 delete e2e-test-crd-publish-openapi-3109-crds test-cr'
Aug 20 23:24:27.855: INFO: stderr: ""
Aug 20 23:24:27.855: INFO: stdout: "e2e-test-crd-publish-openapi-3109-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 20 23:24:27.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8748 apply -f -'
Aug 20 23:24:28.102: INFO: stderr: ""
Aug 20 23:24:28.102: INFO: stdout: "e2e-test-crd-publish-openapi-3109-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 20 23:24:28.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8748 delete e2e-test-crd-publish-openapi-3109-crds test-cr'
Aug 20 23:24:28.239: INFO: stderr: ""
Aug 20 23:24:28.239: INFO: stdout: "e2e-test-crd-publish-openapi-3109-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 20 23:24:28.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3109-crds'
Aug 20 23:24:28.512: INFO: stderr: ""
Aug 20 23:24:28.512: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3109-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:31.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8748" for this suite.

• [SLOW TEST:10.757 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":155,"skipped":2563,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:31.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 20 23:24:35.872: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:36.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6934" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2567,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:36.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 20 23:24:41.414: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:42.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9353" for this suite.

• [SLOW TEST:6.390 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":157,"skipped":2572,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:42.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:48.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9101" for this suite.
STEP: Destroying namespace "nsdeletetest-845" for this suite.
Aug 20 23:24:48.929: INFO: Namespace nsdeletetest-845 was already deleted
STEP: Destroying namespace "nsdeletetest-976" for this suite.

• [SLOW TEST:6.458 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":158,"skipped":2597,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:48.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 20 23:24:49.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0" in namespace "projected-3746" to be "success or failure"
Aug 20 23:24:49.007: INFO: Pod "downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137222ms
Aug 20 23:24:51.033: INFO: Pod "downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0304594s
Aug 20 23:24:53.037: INFO: Pod "downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034199672s
STEP: Saw pod success
Aug 20 23:24:53.037: INFO: Pod "downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0" satisfied condition "success or failure"
Aug 20 23:24:53.040: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0 container client-container: 
STEP: delete the pod
Aug 20 23:24:53.342: INFO: Waiting for pod downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0 to disappear
Aug 20 23:24:53.345: INFO: Pod downwardapi-volume-056e26bb-6bb2-4279-8fe8-20c1d98214e0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:53.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3746" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2597,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:53.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:53.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1804" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":160,"skipped":2609,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:53.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 20 23:24:54.044: INFO: Created pod &Pod{ObjectMeta:{dns-3813  dns-3813 /api/v1/namespaces/dns-3813/pods/dns-3813 bdbd9a27-9ce6-4cd1-9895-51e9b3918e26 1954202 0 2020-08-20 23:24:54 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-glcnd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-glcnd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-glcnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
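The pod dump above shows the two fields this test is about: DNSPolicy: None, which discards the cluster's resolver settings entirely, and DNSConfig, which then supplies the pod's complete resolv.conf (here nameserver 1.1.1.1 and search domain resolv.conf.local). Constructing that spec is roughly:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(config)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			// With DNSPolicy None, DNSConfig below is the pod's entire
			// resolver configuration.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	_, err = cs.CoreV1().Pods("dns-3813").Create(pod)
	must(err)
}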
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 20 23:24:58.070: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3813 PodName:dns-3813 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 23:24:58.070: INFO: >>> kubeConfig: /root/.kube/config
I0820 23:24:58.103408       6 log.go:172] (0xc003c34a50) (0xc0019dafa0) Create stream
I0820 23:24:58.103449       6 log.go:172] (0xc003c34a50) (0xc0019dafa0) Stream added, broadcasting: 1
I0820 23:24:58.105145       6 log.go:172] (0xc003c34a50) Reply frame received for 1
I0820 23:24:58.105193       6 log.go:172] (0xc003c34a50) (0xc0015100a0) Create stream
I0820 23:24:58.105210       6 log.go:172] (0xc003c34a50) (0xc0015100a0) Stream added, broadcasting: 3
I0820 23:24:58.105985       6 log.go:172] (0xc003c34a50) Reply frame received for 3
I0820 23:24:58.106014       6 log.go:172] (0xc003c34a50) (0xc0019db220) Create stream
I0820 23:24:58.106030       6 log.go:172] (0xc003c34a50) (0xc0019db220) Stream added, broadcasting: 5
I0820 23:24:58.106715       6 log.go:172] (0xc003c34a50) Reply frame received for 5
I0820 23:24:58.173379       6 log.go:172] (0xc003c34a50) Data frame received for 3
I0820 23:24:58.173411       6 log.go:172] (0xc0015100a0) (3) Data frame handling
I0820 23:24:58.173452       6 log.go:172] (0xc0015100a0) (3) Data frame sent
I0820 23:24:58.175648       6 log.go:172] (0xc003c34a50) Data frame received for 3
I0820 23:24:58.175698       6 log.go:172] (0xc0015100a0) (3) Data frame handling
I0820 23:24:58.175725       6 log.go:172] (0xc003c34a50) Data frame received for 5
I0820 23:24:58.175740       6 log.go:172] (0xc0019db220) (5) Data frame handling
I0820 23:24:58.177609       6 log.go:172] (0xc003c34a50) Data frame received for 1
I0820 23:24:58.177641       6 log.go:172] (0xc0019dafa0) (1) Data frame handling
I0820 23:24:58.177658       6 log.go:172] (0xc0019dafa0) (1) Data frame sent
I0820 23:24:58.177674       6 log.go:172] (0xc003c34a50) (0xc0019dafa0) Stream removed, broadcasting: 1
I0820 23:24:58.177688       6 log.go:172] (0xc003c34a50) Go away received
I0820 23:24:58.177795       6 log.go:172] (0xc003c34a50) (0xc0019dafa0) Stream removed, broadcasting: 1
I0820 23:24:58.177825       6 log.go:172] (0xc003c34a50) (0xc0015100a0) Stream removed, broadcasting: 3
I0820 23:24:58.177847       6 log.go:172] (0xc003c34a50) (0xc0019db220) Stream removed, broadcasting: 5
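The Create stream / broadcasting: 1, 3, 5 lines above are the client side of the pod exec subresource: the framework upgrades the connection to SPDY and multiplexes an error stream (1), stdout (3), and stderr (5) over it, which is exactly the traffic log.go is printing. A hedged sketch of the same call with client-go (recent releases expose StreamWithContext; 1.17-era clients use the equivalent Stream; config and cs are assumed to be an already-built *rest.Config and clientset):

```go
package exectest

import (
	"bytes"
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a command in a pod's container the way the framework's
// ExecWithOptions does: POST to the pod's "exec" subresource, then stream
// stdout/stderr back over the multiplexed SPDY connection.
func execInPod(ctx context.Context, config *rest.Config, cs kubernetes.Interface,
	ns, pod, container string, cmd []string) (string, string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   cmd,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	// stdout and stderr each ride their own stream; errors come back on a
	// dedicated error stream — the "broadcasting: 1/3/5" lines above.
	err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{
		Stdout: &stdout, Stderr: &stderr,
	})
	return stdout.String(), stderr.String(), err
}
```

The agnhost dns-suffix and dns-server-list subcommands run this way simply print what the container resolved from /etc/resolv.conf, so the test can compare them against the PodDNSConfig it set.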
STEP: Verifying customized DNS server is configured on pod...
Aug 20 23:24:58.177: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3813 PodName:dns-3813 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 23:24:58.177: INFO: >>> kubeConfig: /root/.kube/config
I0820 23:24:58.210028       6 log.go:172] (0xc00528a370) (0xc0015ee5a0) Create stream
I0820 23:24:58.210058       6 log.go:172] (0xc00528a370) (0xc0015ee5a0) Stream added, broadcasting: 1
I0820 23:24:58.212066       6 log.go:172] (0xc00528a370) Reply frame received for 1
I0820 23:24:58.212111       6 log.go:172] (0xc00528a370) (0xc0025b2e60) Create stream
I0820 23:24:58.212128       6 log.go:172] (0xc00528a370) (0xc0025b2e60) Stream added, broadcasting: 3
I0820 23:24:58.213687       6 log.go:172] (0xc00528a370) Reply frame received for 3
I0820 23:24:58.213716       6 log.go:172] (0xc00528a370) (0xc0025b2fa0) Create stream
I0820 23:24:58.213732       6 log.go:172] (0xc00528a370) (0xc0025b2fa0) Stream added, broadcasting: 5
I0820 23:24:58.214603       6 log.go:172] (0xc00528a370) Reply frame received for 5
I0820 23:24:58.284376       6 log.go:172] (0xc00528a370) Data frame received for 3
I0820 23:24:58.284426       6 log.go:172] (0xc0025b2e60) (3) Data frame handling
I0820 23:24:58.284460       6 log.go:172] (0xc0025b2e60) (3) Data frame sent
I0820 23:24:58.286767       6 log.go:172] (0xc00528a370) Data frame received for 3
I0820 23:24:58.286791       6 log.go:172] (0xc0025b2e60) (3) Data frame handling
I0820 23:24:58.287057       6 log.go:172] (0xc00528a370) Data frame received for 5
I0820 23:24:58.287086       6 log.go:172] (0xc0025b2fa0) (5) Data frame handling
I0820 23:24:58.288804       6 log.go:172] (0xc00528a370) Data frame received for 1
I0820 23:24:58.288822       6 log.go:172] (0xc0015ee5a0) (1) Data frame handling
I0820 23:24:58.288828       6 log.go:172] (0xc0015ee5a0) (1) Data frame sent
I0820 23:24:58.288845       6 log.go:172] (0xc00528a370) (0xc0015ee5a0) Stream removed, broadcasting: 1
I0820 23:24:58.288901       6 log.go:172] (0xc00528a370) (0xc0015ee5a0) Stream removed, broadcasting: 1
I0820 23:24:58.288911       6 log.go:172] (0xc00528a370) (0xc0025b2e60) Stream removed, broadcasting: 3
I0820 23:24:58.288921       6 log.go:172] (0xc00528a370) (0xc0025b2fa0) Stream removed, broadcasting: 5
Aug 20 23:24:58.288: INFO: Deleting pod dns-3813...
I0820 23:24:58.288949       6 log.go:172] (0xc00528a370) Go away received
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:24:58.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3813" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":161,"skipped":2611,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:24:58.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8591
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-8591
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-8591
Aug 20 23:24:58.401: INFO: Found 0 stateful pods, waiting for 1
Aug 20 23:25:08.406: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
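Burst scaling is StatefulSet behavior under podManagementPolicy: Parallel: the controller creates and deletes pods without waiting for each predecessor to become Running and Ready, so a deliberately unhealthy pod cannot stall a scale operation. A minimal sketch of such a StatefulSet, assuming the httpd-based webserver container the pod conditions below refer to (labels and the image tag are illustrative, not taken from this log):

```go
package sstest

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// burstStatefulSet sketches a StatefulSet whose scaling is not gated on pod
// health: Parallel tells the controller to launch/terminate pods all at once
// instead of the default ordered, one-at-a-time behavior.
func burstStatefulSet() *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss"} // illustrative selector
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-8591"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName:         "test", // headless service created in the BeforeEach above
			Replicas:            int32Ptr(1),
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4.38-alpine", // illustrative; the test serves /usr/local/apache2/htdocs
					}},
				},
			},
		},
	}
}
```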
Aug 20 23:25:08.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 20 23:25:08.670: INFO: stderr: "I0820 23:25:08.546818    2319 log.go:172] (0xc000a63080) (0xc000b88d20) Create stream\nI0820 23:25:08.546871    2319 log.go:172] (0xc000a63080) (0xc000b88d20) Stream added, broadcasting: 1\nI0820 23:25:08.548644    2319 log.go:172] (0xc000a63080) Reply frame received for 1\nI0820 23:25:08.548679    2319 log.go:172] (0xc000a63080) (0xc000a92280) Create stream\nI0820 23:25:08.548690    2319 log.go:172] (0xc000a63080) (0xc000a92280) Stream added, broadcasting: 3\nI0820 23:25:08.549776    2319 log.go:172] (0xc000a63080) Reply frame received for 3\nI0820 23:25:08.549823    2319 log.go:172] (0xc000a63080) (0xc000a40280) Create stream\nI0820 23:25:08.549856    2319 log.go:172] (0xc000a63080) (0xc000a40280) Stream added, broadcasting: 5\nI0820 23:25:08.550778    2319 log.go:172] (0xc000a63080) Reply frame received for 5\nI0820 23:25:08.623005    2319 log.go:172] (0xc000a63080) Data frame received for 5\nI0820 23:25:08.623032    2319 log.go:172] (0xc000a40280) (5) Data frame handling\nI0820 23:25:08.623053    2319 log.go:172] (0xc000a40280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0820 23:25:08.658353    2319 log.go:172] (0xc000a63080) Data frame received for 3\nI0820 23:25:08.658456    2319 log.go:172] (0xc000a92280) (3) Data frame handling\nI0820 23:25:08.658510    2319 log.go:172] (0xc000a92280) (3) Data frame sent\nI0820 23:25:08.658740    2319 log.go:172] (0xc000a63080) Data frame received for 5\nI0820 23:25:08.658779    2319 log.go:172] (0xc000a40280) (5) Data frame handling\nI0820 23:25:08.658848    2319 log.go:172] (0xc000a63080) Data frame received for 3\nI0820 23:25:08.658885    2319 log.go:172] (0xc000a92280) (3) Data frame handling\nI0820 23:25:08.660849    2319 log.go:172] (0xc000a63080) Data frame received for 1\nI0820 23:25:08.660881    2319 log.go:172] (0xc000b88d20) (1) Data frame handling\nI0820 23:25:08.660896    2319 log.go:172] (0xc000b88d20) (1) Data frame sent\nI0820 23:25:08.660918    2319 log.go:172] (0xc000a63080) (0xc000b88d20) Stream removed, broadcasting: 1\nI0820 23:25:08.660945    2319 log.go:172] (0xc000a63080) Go away received\nI0820 23:25:08.661209    2319 log.go:172] (0xc000a63080) (0xc000b88d20) Stream removed, broadcasting: 1\nI0820 23:25:08.661223    2319 log.go:172] (0xc000a63080) (0xc000a92280) Stream removed, broadcasting: 3\nI0820 23:25:08.661232    2319 log.go:172] (0xc000a63080) (0xc000a40280) Stream removed, broadcasting: 5\n"
Aug 20 23:25:08.670: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 20 23:25:08.670: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 20 23:25:08.673: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 20 23:25:18.677: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
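The mv two commands back is how the test makes ss-0 "unhealthy" without killing it: the container's readiness probe fetches index.html, so moving the file out of the Apache docroot flips Ready to false (as the two Waiting lines above just confirmed) while the process keeps running; moving the file back later restores readiness. A sketch of such a probe, hedged because the exact probe the test wires up is not shown in this log:

```go
package sstest

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// readinessProbe sketches an HTTP probe that fails as soon as index.html is
// moved out of the Apache docroot, turning the pod Ready=false while it keeps
// Running. (The field is named ProbeHandler in current k8s.io/api; releases
// contemporary with this log called it Handler.)
var readinessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/index.html",
			Port: intstr.FromInt(80),
		},
	},
	PeriodSeconds: 1, // illustrative: poll quickly so Ready flips fast
}
```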
Aug 20 23:25:18.677: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 23:25:18.815: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 20 23:25:18.815: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:24:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:24:58 +0000 UTC  }]
Aug 20 23:25:18.815: INFO: 
Aug 20 23:25:18.816: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 20 23:25:20.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.870611854s
Aug 20 23:25:21.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.301702704s
Aug 20 23:25:22.504: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.253901888s
Aug 20 23:25:23.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.182123232s
Aug 20 23:25:24.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.176455481s
Aug 20 23:25:25.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.171321146s
Aug 20 23:25:26.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.157752178s
Aug 20 23:25:27.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.586045ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8591
Aug 20 23:25:28.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:25:28.949: INFO: stderr: "I0820 23:25:28.878863    2339 log.go:172] (0xc0009ac160) (0xc00070bcc0) Create stream\nI0820 23:25:28.878928    2339 log.go:172] (0xc0009ac160) (0xc00070bcc0) Stream added, broadcasting: 1\nI0820 23:25:28.881803    2339 log.go:172] (0xc0009ac160) Reply frame received for 1\nI0820 23:25:28.881840    2339 log.go:172] (0xc0009ac160) (0xc00070bd60) Create stream\nI0820 23:25:28.881851    2339 log.go:172] (0xc0009ac160) (0xc00070bd60) Stream added, broadcasting: 3\nI0820 23:25:28.882945    2339 log.go:172] (0xc0009ac160) Reply frame received for 3\nI0820 23:25:28.882990    2339 log.go:172] (0xc0009ac160) (0xc000654640) Create stream\nI0820 23:25:28.883009    2339 log.go:172] (0xc0009ac160) (0xc000654640) Stream added, broadcasting: 5\nI0820 23:25:28.884128    2339 log.go:172] (0xc0009ac160) Reply frame received for 5\nI0820 23:25:28.942182    2339 log.go:172] (0xc0009ac160) Data frame received for 5\nI0820 23:25:28.942223    2339 log.go:172] (0xc000654640) (5) Data frame handling\nI0820 23:25:28.942245    2339 log.go:172] (0xc000654640) (5) Data frame sent\nI0820 23:25:28.942258    2339 log.go:172] (0xc0009ac160) Data frame received for 5\nI0820 23:25:28.942270    2339 log.go:172] (0xc000654640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0820 23:25:28.942300    2339 log.go:172] (0xc0009ac160) Data frame received for 3\nI0820 23:25:28.942318    2339 log.go:172] (0xc00070bd60) (3) Data frame handling\nI0820 23:25:28.942333    2339 log.go:172] (0xc00070bd60) (3) Data frame sent\nI0820 23:25:28.942346    2339 log.go:172] (0xc0009ac160) Data frame received for 3\nI0820 23:25:28.942359    2339 log.go:172] (0xc00070bd60) (3) Data frame handling\nI0820 23:25:28.943175    2339 log.go:172] (0xc0009ac160) Data frame received for 1\nI0820 23:25:28.943196    2339 log.go:172] (0xc00070bcc0) (1) Data frame handling\nI0820 23:25:28.943204    2339 log.go:172] (0xc00070bcc0) (1) Data frame sent\nI0820 23:25:28.943216    2339 log.go:172] (0xc0009ac160) (0xc00070bcc0) Stream removed, broadcasting: 1\nI0820 23:25:28.943347    2339 log.go:172] (0xc0009ac160) Go away received\nI0820 23:25:28.943494    2339 log.go:172] (0xc0009ac160) (0xc00070bcc0) Stream removed, broadcasting: 1\nI0820 23:25:28.943506    2339 log.go:172] (0xc0009ac160) (0xc00070bd60) Stream removed, broadcasting: 3\nI0820 23:25:28.943512    2339 log.go:172] (0xc0009ac160) (0xc000654640) Stream removed, broadcasting: 5\n"
Aug 20 23:25:28.949: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 20 23:25:28.949: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 20 23:25:28.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:25:29.141: INFO: stderr: "I0820 23:25:29.076186    2359 log.go:172] (0xc0009d53f0) (0xc000a9e820) Create stream\nI0820 23:25:29.076259    2359 log.go:172] (0xc0009d53f0) (0xc000a9e820) Stream added, broadcasting: 1\nI0820 23:25:29.080453    2359 log.go:172] (0xc0009d53f0) Reply frame received for 1\nI0820 23:25:29.080496    2359 log.go:172] (0xc0009d53f0) (0xc00080bae0) Create stream\nI0820 23:25:29.080508    2359 log.go:172] (0xc0009d53f0) (0xc00080bae0) Stream added, broadcasting: 3\nI0820 23:25:29.081593    2359 log.go:172] (0xc0009d53f0) Reply frame received for 3\nI0820 23:25:29.081652    2359 log.go:172] (0xc0009d53f0) (0xc0005bf4a0) Create stream\nI0820 23:25:29.081670    2359 log.go:172] (0xc0009d53f0) (0xc0005bf4a0) Stream added, broadcasting: 5\nI0820 23:25:29.082620    2359 log.go:172] (0xc0009d53f0) Reply frame received for 5\nI0820 23:25:29.133863    2359 log.go:172] (0xc0009d53f0) Data frame received for 5\nI0820 23:25:29.133901    2359 log.go:172] (0xc0005bf4a0) (5) Data frame handling\nI0820 23:25:29.133917    2359 log.go:172] (0xc0005bf4a0) (5) Data frame sent\nI0820 23:25:29.133926    2359 log.go:172] (0xc0009d53f0) Data frame received for 5\nI0820 23:25:29.133934    2359 log.go:172] (0xc0005bf4a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0820 23:25:29.133959    2359 log.go:172] (0xc0009d53f0) Data frame received for 3\nI0820 23:25:29.133973    2359 log.go:172] (0xc00080bae0) (3) Data frame handling\nI0820 23:25:29.133986    2359 log.go:172] (0xc00080bae0) (3) Data frame sent\nI0820 23:25:29.133999    2359 log.go:172] (0xc0009d53f0) Data frame received for 3\nI0820 23:25:29.134008    2359 log.go:172] (0xc00080bae0) (3) Data frame handling\nI0820 23:25:29.135286    2359 log.go:172] (0xc0009d53f0) Data frame received for 1\nI0820 23:25:29.135312    2359 log.go:172] (0xc000a9e820) (1) Data frame handling\nI0820 23:25:29.135335    2359 log.go:172] (0xc000a9e820) (1) Data frame sent\nI0820 23:25:29.135355    2359 log.go:172] (0xc0009d53f0) (0xc000a9e820) Stream removed, broadcasting: 1\nI0820 23:25:29.135374    2359 log.go:172] (0xc0009d53f0) Go away received\nI0820 23:25:29.135726    2359 log.go:172] (0xc0009d53f0) (0xc000a9e820) Stream removed, broadcasting: 1\nI0820 23:25:29.135742    2359 log.go:172] (0xc0009d53f0) (0xc00080bae0) Stream removed, broadcasting: 3\nI0820 23:25:29.135749    2359 log.go:172] (0xc0009d53f0) (0xc0005bf4a0) Stream removed, broadcasting: 5\n"
Aug 20 23:25:29.141: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 20 23:25:29.141: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 20 23:25:29.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:25:29.330: INFO: stderr: "I0820 23:25:29.265142    2380 log.go:172] (0xc000a36000) (0xc0006d06e0) Create stream\nI0820 23:25:29.265193    2380 log.go:172] (0xc000a36000) (0xc0006d06e0) Stream added, broadcasting: 1\nI0820 23:25:29.266711    2380 log.go:172] (0xc000a36000) Reply frame received for 1\nI0820 23:25:29.266755    2380 log.go:172] (0xc000a36000) (0xc0004894a0) Create stream\nI0820 23:25:29.266768    2380 log.go:172] (0xc000a36000) (0xc0004894a0) Stream added, broadcasting: 3\nI0820 23:25:29.267599    2380 log.go:172] (0xc000a36000) Reply frame received for 3\nI0820 23:25:29.267627    2380 log.go:172] (0xc000a36000) (0xc000438000) Create stream\nI0820 23:25:29.267636    2380 log.go:172] (0xc000a36000) (0xc000438000) Stream added, broadcasting: 5\nI0820 23:25:29.268216    2380 log.go:172] (0xc000a36000) Reply frame received for 5\nI0820 23:25:29.319287    2380 log.go:172] (0xc000a36000) Data frame received for 3\nI0820 23:25:29.319331    2380 log.go:172] (0xc0004894a0) (3) Data frame handling\nI0820 23:25:29.319352    2380 log.go:172] (0xc0004894a0) (3) Data frame sent\nI0820 23:25:29.319369    2380 log.go:172] (0xc000a36000) Data frame received for 3\nI0820 23:25:29.319379    2380 log.go:172] (0xc0004894a0) (3) Data frame handling\nI0820 23:25:29.319415    2380 log.go:172] (0xc000a36000) Data frame received for 5\nI0820 23:25:29.319444    2380 log.go:172] (0xc000438000) (5) Data frame handling\nI0820 23:25:29.319464    2380 log.go:172] (0xc000438000) (5) Data frame sent\nI0820 23:25:29.319476    2380 log.go:172] (0xc000a36000) Data frame received for 5\nI0820 23:25:29.319487    2380 log.go:172] (0xc000438000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0820 23:25:29.320957    2380 log.go:172] (0xc000a36000) Data frame received for 1\nI0820 23:25:29.321000    2380 log.go:172] (0xc0006d06e0) (1) Data frame handling\nI0820 23:25:29.321015    2380 log.go:172] (0xc0006d06e0) (1) Data frame sent\nI0820 23:25:29.321138    2380 log.go:172] (0xc000a36000) (0xc0006d06e0) Stream removed, broadcasting: 1\nI0820 23:25:29.321184    2380 log.go:172] (0xc000a36000) Go away received\nI0820 23:25:29.321761    2380 log.go:172] (0xc000a36000) (0xc0006d06e0) Stream removed, broadcasting: 1\nI0820 23:25:29.321808    2380 log.go:172] (0xc000a36000) (0xc0004894a0) Stream removed, broadcasting: 3\nI0820 23:25:29.321823    2380 log.go:172] (0xc000a36000) (0xc000438000) Stream removed, broadcasting: 5\n"
Aug 20 23:25:29.330: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 20 23:25:29.330: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 20 23:25:29.336: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:25:29.336: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:25:29.336: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Aug 20 23:25:29.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 20 23:25:29.752: INFO: stderr: "I0820 23:25:29.676156    2400 log.go:172] (0xc0000f6f20) (0xc0006c7ae0) Create stream\nI0820 23:25:29.676207    2400 log.go:172] (0xc0000f6f20) (0xc0006c7ae0) Stream added, broadcasting: 1\nI0820 23:25:29.678732    2400 log.go:172] (0xc0000f6f20) Reply frame received for 1\nI0820 23:25:29.678779    2400 log.go:172] (0xc0000f6f20) (0xc0008e2000) Create stream\nI0820 23:25:29.678792    2400 log.go:172] (0xc0000f6f20) (0xc0008e2000) Stream added, broadcasting: 3\nI0820 23:25:29.679771    2400 log.go:172] (0xc0000f6f20) Reply frame received for 3\nI0820 23:25:29.679813    2400 log.go:172] (0xc0000f6f20) (0xc0008d0000) Create stream\nI0820 23:25:29.679830    2400 log.go:172] (0xc0000f6f20) (0xc0008d0000) Stream added, broadcasting: 5\nI0820 23:25:29.680867    2400 log.go:172] (0xc0000f6f20) Reply frame received for 5\nI0820 23:25:29.740296    2400 log.go:172] (0xc0000f6f20) Data frame received for 3\nI0820 23:25:29.740365    2400 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0820 23:25:29.740394    2400 log.go:172] (0xc0008e2000) (3) Data frame sent\nI0820 23:25:29.740426    2400 log.go:172] (0xc0000f6f20) Data frame received for 5\nI0820 23:25:29.740444    2400 log.go:172] (0xc0008d0000) (5) Data frame handling\nI0820 23:25:29.740451    2400 log.go:172] (0xc0008d0000) (5) Data frame sent\nI0820 23:25:29.740457    2400 log.go:172] (0xc0000f6f20) Data frame received for 5\nI0820 23:25:29.740462    2400 log.go:172] (0xc0008d0000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0820 23:25:29.740483    2400 log.go:172] (0xc0000f6f20) Data frame received for 3\nI0820 23:25:29.740492    2400 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0820 23:25:29.741944    2400 log.go:172] (0xc0000f6f20) Data frame received for 1\nI0820 23:25:29.741964    2400 log.go:172] (0xc0006c7ae0) (1) Data frame handling\nI0820 23:25:29.741971    2400 log.go:172] (0xc0006c7ae0) (1) Data frame sent\nI0820 23:25:29.742109    2400 log.go:172] (0xc0000f6f20) (0xc0006c7ae0) Stream removed, broadcasting: 1\nI0820 23:25:29.742490    2400 log.go:172] (0xc0000f6f20) (0xc0006c7ae0) Stream removed, broadcasting: 1\nI0820 23:25:29.742514    2400 log.go:172] (0xc0000f6f20) (0xc0008e2000) Stream removed, broadcasting: 3\nI0820 23:25:29.742527    2400 log.go:172] (0xc0000f6f20) (0xc0008d0000) Stream removed, broadcasting: 5\n"
Aug 20 23:25:29.752: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 20 23:25:29.752: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 20 23:25:29.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 20 23:25:30.094: INFO: stderr: "I0820 23:25:29.892660    2421 log.go:172] (0xc000b2f290) (0xc0009b6640) Create stream\nI0820 23:25:29.892862    2421 log.go:172] (0xc000b2f290) (0xc0009b6640) Stream added, broadcasting: 1\nI0820 23:25:29.898163    2421 log.go:172] (0xc000b2f290) Reply frame received for 1\nI0820 23:25:29.898227    2421 log.go:172] (0xc000b2f290) (0xc0006f2640) Create stream\nI0820 23:25:29.898248    2421 log.go:172] (0xc000b2f290) (0xc0006f2640) Stream added, broadcasting: 3\nI0820 23:25:29.899380    2421 log.go:172] (0xc000b2f290) Reply frame received for 3\nI0820 23:25:29.899418    2421 log.go:172] (0xc000b2f290) (0xc000521400) Create stream\nI0820 23:25:29.899435    2421 log.go:172] (0xc000b2f290) (0xc000521400) Stream added, broadcasting: 5\nI0820 23:25:29.900345    2421 log.go:172] (0xc000b2f290) Reply frame received for 5\nI0820 23:25:29.979363    2421 log.go:172] (0xc000b2f290) Data frame received for 5\nI0820 23:25:29.979385    2421 log.go:172] (0xc000521400) (5) Data frame handling\nI0820 23:25:29.979401    2421 log.go:172] (0xc000521400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0820 23:25:30.079987    2421 log.go:172] (0xc000b2f290) Data frame received for 3\nI0820 23:25:30.080038    2421 log.go:172] (0xc0006f2640) (3) Data frame handling\nI0820 23:25:30.080079    2421 log.go:172] (0xc0006f2640) (3) Data frame sent\nI0820 23:25:30.080100    2421 log.go:172] (0xc000b2f290) Data frame received for 3\nI0820 23:25:30.080117    2421 log.go:172] (0xc0006f2640) (3) Data frame handling\nI0820 23:25:30.080376    2421 log.go:172] (0xc000b2f290) Data frame received for 5\nI0820 23:25:30.080403    2421 log.go:172] (0xc000521400) (5) Data frame handling\nI0820 23:25:30.082773    2421 log.go:172] (0xc000b2f290) Data frame received for 1\nI0820 23:25:30.082803    2421 log.go:172] (0xc0009b6640) (1) Data frame handling\nI0820 23:25:30.082817    2421 log.go:172] (0xc0009b6640) (1) Data frame sent\nI0820 23:25:30.082839    2421 log.go:172] (0xc000b2f290) (0xc0009b6640) Stream removed, broadcasting: 1\nI0820 23:25:30.083242    2421 log.go:172] (0xc000b2f290) (0xc0009b6640) Stream removed, broadcasting: 1\nI0820 23:25:30.083275    2421 log.go:172] (0xc000b2f290) (0xc0006f2640) Stream removed, broadcasting: 3\nI0820 23:25:30.083288    2421 log.go:172] (0xc000b2f290) (0xc000521400) Stream removed, broadcasting: 5\n"
Aug 20 23:25:30.094: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 20 23:25:30.094: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 20 23:25:30.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 20 23:25:30.353: INFO: stderr: "I0820 23:25:30.213708    2441 log.go:172] (0xc000b0a000) (0xc0006b7ae0) Create stream\nI0820 23:25:30.213787    2441 log.go:172] (0xc000b0a000) (0xc0006b7ae0) Stream added, broadcasting: 1\nI0820 23:25:30.216978    2441 log.go:172] (0xc000b0a000) Reply frame received for 1\nI0820 23:25:30.217030    2441 log.go:172] (0xc000b0a000) (0xc0006706e0) Create stream\nI0820 23:25:30.217046    2441 log.go:172] (0xc000b0a000) (0xc0006706e0) Stream added, broadcasting: 3\nI0820 23:25:30.218323    2441 log.go:172] (0xc000b0a000) Reply frame received for 3\nI0820 23:25:30.218367    2441 log.go:172] (0xc000b0a000) (0xc0006b7cc0) Create stream\nI0820 23:25:30.218378    2441 log.go:172] (0xc000b0a000) (0xc0006b7cc0) Stream added, broadcasting: 5\nI0820 23:25:30.219535    2441 log.go:172] (0xc000b0a000) Reply frame received for 5\nI0820 23:25:30.291590    2441 log.go:172] (0xc000b0a000) Data frame received for 5\nI0820 23:25:30.291611    2441 log.go:172] (0xc0006b7cc0) (5) Data frame handling\nI0820 23:25:30.291623    2441 log.go:172] (0xc0006b7cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0820 23:25:30.340849    2441 log.go:172] (0xc000b0a000) Data frame received for 3\nI0820 23:25:30.340888    2441 log.go:172] (0xc0006706e0) (3) Data frame handling\nI0820 23:25:30.340904    2441 log.go:172] (0xc0006706e0) (3) Data frame sent\nI0820 23:25:30.340916    2441 log.go:172] (0xc000b0a000) Data frame received for 3\nI0820 23:25:30.340926    2441 log.go:172] (0xc0006706e0) (3) Data frame handling\nI0820 23:25:30.341087    2441 log.go:172] (0xc000b0a000) Data frame received for 5\nI0820 23:25:30.341122    2441 log.go:172] (0xc0006b7cc0) (5) Data frame handling\nI0820 23:25:30.343606    2441 log.go:172] (0xc000b0a000) Data frame received for 1\nI0820 23:25:30.343634    2441 log.go:172] (0xc0006b7ae0) (1) Data frame handling\nI0820 23:25:30.343654    2441 log.go:172] (0xc0006b7ae0) (1) Data frame sent\nI0820 23:25:30.343690    2441 log.go:172] (0xc000b0a000) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0820 23:25:30.343710    2441 log.go:172] (0xc000b0a000) Go away received\nI0820 23:25:30.344030    2441 log.go:172] (0xc000b0a000) (0xc0006b7ae0) Stream removed, broadcasting: 1\nI0820 23:25:30.344050    2441 log.go:172] (0xc000b0a000) (0xc0006706e0) Stream removed, broadcasting: 3\nI0820 23:25:30.344057    2441 log.go:172] (0xc000b0a000) (0xc0006b7cc0) Stream removed, broadcasting: 5\n"
Aug 20 23:25:30.353: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 20 23:25:30.353: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 20 23:25:30.353: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 23:25:30.355: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 20 23:25:40.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 20 23:25:40.364: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 20 23:25:40.364: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 20 23:25:40.439: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 20 23:25:40.439: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:24:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:24:58 +0000 UTC  }]
Aug 20 23:25:40.439: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  }]
Aug 20 23:25:40.439: INFO: ss-2  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  }]
Aug 20 23:25:40.439: INFO: 
Aug 20 23:25:40.439: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 20 23:25:41.528: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 20 23:25:41.528: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:24:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:24:58 +0000 UTC  }]
Aug 20 23:25:41.528: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  }]
Aug 20 23:25:41.528: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 23:25:18 +0000 UTC  }]
Aug 20 23:25:41.528: INFO: 
Aug 20 23:25:41.528: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 20 23:25:42.533: INFO: (poll unchanged: ss-0, ss-1, ss-2 all still Running, GRACE 30s; StatefulSet ss has not reached scale 0, at 3)
Aug 20 23:25:43.538 – 23:25:49.624: INFO: (identical per-second polls condensed) ss-1 and ss-2 on jerma-worker2 moved to PHASE Pending while ss-0 on jerma-worker stayed Running, all with GRACE 30s and the same ContainersNotReady [webserver] conditions as above; every iteration ended with: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-8591
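Scaling to 0 is an ordinary replicas update on the StatefulSet spec; with Parallel pod management the controller then tears down all remaining pods at once rather than in reverse ordinal order. A hedged sketch with current client-go (context-taking signatures; the 1.17-era client used by this suite omits the ctx arguments):

```go
package sstest

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleToZero sets spec.replicas=0 on an existing StatefulSet; the controller
// reacts by deleting the remaining pods, which is what the polls below wait on.
func scaleToZero(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	_, err = cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
	return err
}
```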
Aug 20 23:25:50.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:25:50.751: INFO: rc: 1
Aug 20 23:25:50.751: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 20 23:26:00.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:26:00.873: INFO: rc: 1
Aug 20 23:26:00.873: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
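The two blocks above (and everything condensed below) are the framework's RunHostCmd retry loop: each failed kubectl exec is logged and re-attempted on a fixed 10-second cadence, and once ss-0 is deleted by the scale-down the error shifts from "container not found" to NotFound. A sketch of the same pattern using apimachinery's wait helpers (the timeout and logging are assumptions, not values from this log):

```go
package sstest

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runHostCmdWithRetry keeps re-running a kubectl exec until it succeeds or the
// timeout expires, mirroring the "Waiting 10s to retry failed RunHostCmd"
// lines. A NotFound error here is expected once the pod has been deleted by
// the scale-down; the loop simply polls through it.
func runHostCmdWithRetry(ns, pod, cmd string) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
			"exec", "--namespace="+ns, pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("Waiting 10s to retry failed RunHostCmd: %v\n%s\n", err, out)
			return false, nil // transient; retry on the next poll
		}
		return true, nil
	})
}
```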
Aug 20 23:26:10.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:26:10.980: INFO: rc: 1
Aug 20 23:26:10.981: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:26:20.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:26:21.084: INFO: rc: 1
Aug 20 23:26:21.084: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:26:31.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:26:31.191: INFO: rc: 1
Aug 20 23:26:31.192: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:26:41.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:26:41.294: INFO: rc: 1
Aug 20 23:26:41.294: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:26:51.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:26:51.401: INFO: rc: 1
Aug 20 23:26:51.401: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:27:01.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:27:01.514: INFO: rc: 1
Aug 20 23:27:01.514: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:27:11.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:27:11.627: INFO: rc: 1
Aug 20 23:27:11.627: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:27:21.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:27:21.725: INFO: rc: 1
Aug 20 23:27:21.725: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:27:31.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:27:31.827: INFO: rc: 1
Aug 20 23:27:31.827: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 20 23:27:41.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:27:41.942: INFO: rc: 1
Aug 20 23:27:41.942: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
(elided: 18 further attempts of the identical kubectl exec retry, one every 10s from 23:27:51 through 23:30:43, each exiting rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found)
Aug 20 23:30:53.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8591 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:30:53.878: INFO: rc: 1
Aug 20 23:30:53.878: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Aug 20 23:30:53.879: INFO: Scaling statefulset ss to 0
Aug 20 23:30:53.887: INFO: Waiting for statefulset status.replicas updated to 0
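The block above is a fixed-interval retry: the framework re-runs the same kubectl exec every 10s until the pod comes back or an overall timeout expires, then falls through to the scale-down. A minimal Go sketch of that pattern, assuming client-go's wait.PollImmediate; the wrapper name runHostCmdWithRetry is hypothetical (the framework's own helper is the RunHostCmd named in the log), and the 6m budget is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runHostCmdWithRetry mirrors the loop above: re-run the same kubectl exec
// every 10s until it succeeds or the overall timeout expires.
func runHostCmdWithRetry(ns, pod, cmd string) (string, error) {
	var out []byte
	err := wait.PollImmediate(10*time.Second, 6*time.Minute, func() (bool, error) {
		var runErr error
		out, runErr = exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
			"exec", "--namespace="+ns, pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
		if runErr != nil {
			fmt.Printf("Waiting 10s to retry failed RunHostCmd: %v\n", runErr)
			return false, nil // transient failure (e.g. pod NotFound): keep polling
		}
		return true, nil
	})
	return string(out), err
}

func main() {
	out, err := runHostCmdWithRetry("statefulset-8591", "ss-0",
		"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
	fmt.Println(out, err)
}

Returning (false, nil) from the condition treats the NotFound as transient and keeps polling; returning a non-nil error instead would abort the poll on the first failure.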
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 20 23:30:53.890: INFO: Deleting all statefulset in ns statefulset-8591
Aug 20 23:30:53.892: INFO: Scaling statefulset ss to 0
Aug 20 23:30:53.899: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 23:30:53.902: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:30:53.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8591" for this suite.

• [SLOW TEST:355.592 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":162,"skipped":2619,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:30:53.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:30:54.580: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 20 23:30:56.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563054, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563054, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563054, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563054, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:30:59.739: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:30:59.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
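For orientation, the conversion exercised here is served by the webhook pod deployed above: the apiserver posts a ConversionReview listing objects at their stored version plus the desired apiVersion, and the webhook must return them converted. A minimal Go sketch, assuming the apiextensions v1 ConversionReview types; the /crdconvert path, port, and the apiVersion-only rewrite are simplifying assumptions (a real converter also maps schema changes):

package main

import (
	"encoding/json"
	"net/http"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// serveConvert handles the ConversionReview the apiserver posts during this
// spec: each object arrives at its stored version and must be returned at
// Request.DesiredAPIVersion.
func serveConvert(w http.ResponseWriter, r *http.Request) {
	review := apiextensionsv1.ConversionReview{}
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed ConversionReview", http.StatusBadRequest)
		return
	}
	converted := make([]runtime.RawExtension, 0, len(review.Request.Objects))
	for _, obj := range review.Request.Objects {
		var u map[string]interface{}
		if err := json.Unmarshal(obj.Raw, &u); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		u["apiVersion"] = review.Request.DesiredAPIVersion // schema mapping elided
		raw, _ := json.Marshal(u)
		converted = append(converted, runtime.RawExtension{Raw: raw})
	}
	review.Response = &apiextensionsv1.ConversionResponse{
		UID:              review.Request.UID,
		ConvertedObjects: converted,
		Result:           metav1.Status{Status: metav1.StatusSuccess},
	}
	review.Request = nil
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/crdconvert", serveConvert) // path is illustrative
	http.ListenAndServe(":8443", nil)            // the real webhook terminates TLS
}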
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:31:00.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1673" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:7.109 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":163,"skipped":2624,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:31:01.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 20 23:31:01.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
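The mechanism behind this spec is the per-version Served flag on the CRD: flipping it to false removes that version's definitions from the published OpenAPI document while the other version's stay intact. A minimal Go sketch of the shape involved, assuming the apiextensions v1 types; the version names are illustrative, not the ones this test generates:

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// multiVersionCRD sketches the relevant part of the CRD spec: exactly one
// version carries Storage:true, and only versions with Served:true appear
// in the published OpenAPI document.
func multiVersionCRD() []apiextensionsv1.CustomResourceDefinitionVersion {
	return []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v2", Served: true, Storage: true},   // stays published
		{Name: "v3", Served: false, Storage: false}, // not served: pruned from the spec
	}
}

func main() { _ = multiVersionCRD() }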
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:31:16.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4811" for this suite.

• [SLOW TEST:15.021 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":164,"skipped":2637,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:31:16.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:31:16.598: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 23:31:18.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:31:20.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563076, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:31:23.646: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 20 23:31:23.668: INFO: >>> kubeConfig: /root/.kube/config
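What "registering the crd webhook" amounts to is a ValidatingWebhookConfiguration scoped to CREATE on customresourcedefinitions, so the apiserver consults the (rejecting) webhook before persisting any new CRD. A minimal Go sketch, assuming the admissionregistration v1 types; the configuration name and /crd path are illustrative, while the namespace and service name are taken from this log:

package main

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyCRDWebhook sketches the registration this spec performs: any CREATE of
// a customresourcedefinitions object is sent to the webhook service first.
func denyCRDWebhook() admissionregistrationv1.ValidatingWebhookConfiguration {
	fail := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/crd" // illustrative
	return admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-creation.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd-creation.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7847", Name: "e2e-test-webhook", Path: &path,
				},
			},
			FailurePolicy:           &fail,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}

func main() { _ = denyCRDWebhook() }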
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:31:23.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7847" for this suite.
STEP: Destroying namespace "webhook-7847-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.794 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":165,"skipped":2660,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:31:23.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-06328b6d-b351-4fb4-ae69-d3971161dfea
STEP: Creating configMap with name cm-test-opt-upd-35b6bdca-2916-4c1d-86f5-843aac58f370
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-06328b6d-b351-4fb4-ae69-d3971161dfea
STEP: Updating configmap cm-test-opt-upd-35b6bdca-2916-4c1d-86f5-843aac58f370
STEP: Creating configMap with name cm-test-opt-create-ef54eda8-9129-4e4d-9e33-04e694c8c5d1
STEP: waiting to observe update in volume
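The volume being watched is a projected volume whose configMap sources are marked optional, which is why deleting cm-test-opt-del-... and creating cm-test-opt-create-... update the mounted files instead of failing the pod. A minimal Go sketch, assuming the core/v1 projected-volume types; the volume name is illustrative, the configMap names are the ones from this log:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalProjection builds a projected volume whose configMap sources may
// be absent: missing ones are simply omitted, and the kubelet refreshes the
// files when the configMaps change.
func optionalProjection(names []string) corev1.Volume {
	optional := true
	sources := make([]corev1.VolumeProjection, 0, len(names))
	for _, n := range names {
		sources = append(sources, corev1.VolumeProjection{
			ConfigMap: &corev1.ConfigMapProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: n},
				Optional:             &optional,
			},
		})
	}
	return corev1.Volume{
		Name: "projected-configmap-volume", // illustrative
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{Sources: sources},
		},
	}
}

func main() {
	_ = optionalProjection([]string{
		"cm-test-opt-del-06328b6d-b351-4fb4-ae69-d3971161dfea",
		"cm-test-opt-upd-35b6bdca-2916-4c1d-86f5-843aac58f370",
		"cm-test-opt-create-ef54eda8-9129-4e4d-9e33-04e694c8c5d1",
	})
}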
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:31:32.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2382" for this suite.

• [SLOW TEST:8.308 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2668,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:31:32.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:31:32.249: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 20 23:31:37.255: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 20 23:31:37.255: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 20 23:31:39.259: INFO: Creating deployment "test-rollover-deployment"
Aug 20 23:31:39.400: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 20 23:31:41.407: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 20 23:31:41.414: INFO: Ensure that both replica sets have 1 created replica
Aug 20 23:31:41.419: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 20 23:31:41.426: INFO: Updating deployment test-rollover-deployment
Aug 20 23:31:41.426: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 20 23:31:43.437: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 20 23:31:43.443: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 20 23:31:43.450: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 23:31:43.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563101, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:31:45.457: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 23:31:45.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563104, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:31:47.458: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 23:31:47.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563104, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:31:49.458: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 23:31:49.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563104, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:31:51.456: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 23:31:51.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563104, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:31:53.457: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 23:31:53.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563104, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563099, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:31:55.456: INFO: 
Aug 20 23:31:55.456: INFO: Ensure that both old replica sets have no replicas
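The status churn above follows directly from the deployment's knobs, visible in the dump below: MaxUnavailable:0 with MaxSurge:1 surges exactly one new pod while the old one keeps serving, and MinReadySeconds:10 is why the status sat at ReadyReplicas:2, AvailableReplicas:1 for roughly ten seconds before the old replica sets went to zero. A minimal Go sketch of those fields, assuming the apps/v1 types:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rolloverStrategy reproduces the strategy in the Deployment dump: surge one
// new pod, never take the old one down early, and count a new pod available
// only after it has been Ready for 10s.
func rolloverStrategy() (appsv1.DeploymentStrategy, int32) {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	return appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}, 10 // MinReadySeconds
}

func main() { _, _ = rolloverStrategy() }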
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 20 23:31:55.464: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-8107 /apis/apps/v1/namespaces/deployment-8107/deployments/test-rollover-deployment 5d949f16-862c-4a54-a967-4edae07e1b8b 1955893 2 2020-08-20 23:31:39 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b101b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-20 23:31:39 +0000 UTC,LastTransitionTime:2020-08-20 23:31:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-20 23:31:54 +0000 UTC,LastTransitionTime:2020-08-20 23:31:39 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 20 23:31:55.467: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-8107 /apis/apps/v1/namespaces/deployment-8107/replicasets/test-rollover-deployment-574d6dfbff 004dad2e-40f4-40e9-b95c-376e5ac9794b 1955883 2 2020-08-20 23:31:41 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 5d949f16-862c-4a54-a967-4edae07e1b8b 0xc003a7be17 0xc003a7be18}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a7be98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:31:55.467: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 20 23:31:55.467: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-8107 /apis/apps/v1/namespaces/deployment-8107/replicasets/test-rollover-controller 1801828e-ddcf-4275-85b9-9dda24b6072d 1955892 2 2020-08-20 23:31:32 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 5d949f16-862c-4a54-a967-4edae07e1b8b 0xc003a7bd0f 0xc003a7bd20}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003a7bd98  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:31:55.467: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-8107 /apis/apps/v1/namespaces/deployment-8107/replicasets/test-rollover-deployment-f6c94f66c 034a6394-74c2-4147-afe5-3c7253bebc49 1955828 2 2020-08-20 23:31:39 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 5d949f16-862c-4a54-a967-4edae07e1b8b 0xc003a7bf10 0xc003a7bf11}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a7bf98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:31:55.470: INFO: Pod "test-rollover-deployment-574d6dfbff-tzdzg" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-tzdzg test-rollover-deployment-574d6dfbff- deployment-8107 /api/v1/namespaces/deployment-8107/pods/test-rollover-deployment-574d6dfbff-tzdzg a61c270c-0d50-4635-9e11-a395c98d7b25 1955848 0 2020-08-20 23:31:41 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 004dad2e-40f4-40e9-b95c-376e5ac9794b 0xc003b105d7 0xc003b105d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t6gs6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t6gs6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t6gs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:31:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:31:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:31:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.58,StartTime:2020-08-20 23:31:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:31:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b33ab14e645af0df0966639cd93bbac696b412d84ec1d68b431e40290342f59d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:31:55.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8107" for this suite.

• [SLOW TEST:23.317 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":167,"skipped":2679,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:31:55.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 20 23:31:55.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877" in namespace "projected-7001" to be "success or failure"
Aug 20 23:31:55.600: INFO: Pod "downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877": Phase="Pending", Reason="", readiness=false. Elapsed: 30.753719ms
Aug 20 23:31:57.604: INFO: Pod "downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035072392s
Aug 20 23:31:59.608: INFO: Pod "downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039202801s
STEP: Saw pod success
Aug 20 23:31:59.608: INFO: Pod "downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877" satisfied condition "success or failure"
Aug 20 23:31:59.611: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877 container client-container: 
STEP: delete the pod
Aug 20 23:31:59.628: INFO: Waiting for pod downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877 to disappear
Aug 20 23:31:59.633: INFO: Pod downwardapi-volume-2db2f3e3-da04-4f91-91d4-9850aec42877 no longer exists
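The pod under test exposes its own limits.cpu through a downward API volume file and prints it, which is what the framework reads back from the client-container log. A minimal Go sketch of that projection, assuming the core/v1 types; the volume name and file path are illustrative, the container name matches this log:

package main

import corev1 "k8s.io/api/core/v1"

// cpuLimitFile projects the container's own CPU limit into a file the test
// container can cat back out.
func cpuLimitFile() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo", // illustrative
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
}

func main() { _ = cpuLimitFile() }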
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:31:59.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7001" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2687,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:31:59.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 20 23:31:59.865: INFO: Waiting up to 5m0s for pod "client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc" in namespace "containers-1981" to be "success or failure"
Aug 20 23:31:59.987: INFO: Pod "client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 121.910168ms
Aug 20 23:32:01.990: INFO: Pod "client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124876913s
Aug 20 23:32:03.994: INFO: Pod "client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc": Phase="Running", Reason="", readiness=true. Elapsed: 4.12900265s
Aug 20 23:32:05.998: INFO: Pod "client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132958964s
STEP: Saw pod success
Aug 20 23:32:05.998: INFO: Pod "client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc" satisfied condition "success or failure"
Aug 20 23:32:06.000: INFO: Trying to get logs from node jerma-worker pod client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc container test-container: 
STEP: delete the pod
Aug 20 23:32:06.031: INFO: Waiting for pod client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc to disappear
Aug 20 23:32:06.038: INFO: Pod client-containers-d9ec77d7-056e-46cc-a429-74130e702dfc no longer exists
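The override being tested is the Kubernetes-level one: setting Command on a container replaces the image's ENTRYPOINT (Args, if set, would replace its CMD). A minimal Go sketch, assuming the core/v1 types; the image and command values are illustrative, though the container name matches this log:

package main

import corev1 "k8s.io/api/core/v1"

// overrideEntrypoint sets Command, so the image's default ENTRYPOINT is
// ignored and the given argv runs instead.
func overrideEntrypoint() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29", // illustrative
		Command: []string{"/bin/echo", "command override"},
	}
}

func main() { _ = overrideEntrypoint() }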
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:32:06.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1981" for this suite.

• [SLOW TEST:6.404 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2691,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:32:06.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:32:28.093: INFO: Container started at 2020-08-20 23:32:08 +0000 UTC, pod became ready at 2020-08-20 23:32:26 +0000 UTC
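The 18s gap logged above (started 23:32:08, ready 23:32:26) is the point of the spec: a readiness probe with an initial delay cannot mark the pod Ready before the delay elapses, and since no liveness probe is set, a slow start never triggers a restart. A minimal Go sketch of such a probe, assuming the v1.17-era core/v1 API where the handler field is the embedded Handler (later releases rename it ProbeHandler); the command and timings are illustrative:

package main

import corev1 "k8s.io/api/core/v1"

// readinessProbe gates Ready on an exec check that is not attempted until
// InitialDelaySeconds have passed; readiness failures never restart the pod.
func readinessProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
		},
		InitialDelaySeconds: 10,
		PeriodSeconds:       5,
	}
}

func main() { _ = readinessProbe() }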
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:32:28.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-49" for this suite.

• [SLOW TEST:22.056 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2702,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:32:28.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-441af457-f300-4b3f-97d8-c5d19c67a12a
STEP: Creating a pod to test consume configMaps
Aug 20 23:32:28.223: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd" in namespace "projected-747" to be "success or failure"
Aug 20 23:32:28.254: INFO: Pod "pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.662778ms
Aug 20 23:32:30.258: INFO: Pod "pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035216602s
Aug 20 23:32:32.263: INFO: Pod "pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd": Phase="Running", Reason="", readiness=true. Elapsed: 4.039791285s
Aug 20 23:32:34.267: INFO: Pod "pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044043386s
STEP: Saw pod success
Aug 20 23:32:34.267: INFO: Pod "pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd" satisfied condition "success or failure"
Aug 20 23:32:34.271: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd container projected-configmap-volume-test: 
STEP: delete the pod
Aug 20 23:32:34.330: INFO: Waiting for pod pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd to disappear
Aug 20 23:32:34.343: INFO: Pod pod-projected-configmaps-3993cc28-0fc2-4967-a55c-769de44ef7dd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:32:34.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-747" for this suite.

• [SLOW TEST:6.249 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2718,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:32:34.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Aug 20 23:32:34.447: INFO: Waiting up to 5m0s for pod "var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98" in namespace "var-expansion-9868" to be "success or failure"
Aug 20 23:32:34.451: INFO: Pod "var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.82437ms
Aug 20 23:32:36.456: INFO: Pod "var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008726461s
Aug 20 23:32:38.460: INFO: Pod "var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012824526s
STEP: Saw pod success
Aug 20 23:32:38.460: INFO: Pod "var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98" satisfied condition "success or failure"
Aug 20 23:32:38.463: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98 container dapi-container: 
STEP: delete the pod
Aug 20 23:32:38.519: INFO: Waiting for pod var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98 to disappear
Aug 20 23:32:38.529: INFO: Pod var-expansion-d8a57107-fc0c-4f26-8a3d-db4b49c34f98 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:32:38.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9868" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2720,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:32:38.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:32:42.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1150" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2723,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:32:42.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-f5070747-a4f6-4021-9c3f-36f73f4b5394
STEP: Creating a pod to test consume secrets
Aug 20 23:32:42.729: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49" in namespace "projected-1189" to be "success or failure"
Aug 20 23:32:42.733: INFO: Pod "pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013811ms
Aug 20 23:32:44.737: INFO: Pod "pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007903743s
Aug 20 23:32:46.800: INFO: Pod "pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070284721s
STEP: Saw pod success
Aug 20 23:32:46.800: INFO: Pod "pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49" satisfied condition "success or failure"
Aug 20 23:32:46.802: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49 container projected-secret-volume-test: 
STEP: delete the pod
Aug 20 23:32:47.064: INFO: Waiting for pod pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49 to disappear
Aug 20 23:32:47.071: INFO: Pod pod-projected-secrets-828429ea-f00a-4aa8-a8b2-0d5cb059ed49 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:32:47.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1189" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2736,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:32:47.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 20 23:32:47.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2849'
Aug 20 23:32:47.433: INFO: stderr: ""
Aug 20 23:32:47.433: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 20 23:32:47.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2849'
Aug 20 23:32:47.534: INFO: stderr: ""
Aug 20 23:32:47.534: INFO: stdout: "update-demo-nautilus-5qkwz update-demo-nautilus-ntvkm "
Aug 20 23:32:47.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qkwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:32:47.614: INFO: stderr: ""
Aug 20 23:32:47.614: INFO: stdout: ""
Aug 20 23:32:47.614: INFO: update-demo-nautilus-5qkwz is created but not running
Aug 20 23:32:52.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2849'
Aug 20 23:32:52.730: INFO: stderr: ""
Aug 20 23:32:52.730: INFO: stdout: "update-demo-nautilus-5qkwz update-demo-nautilus-ntvkm "
Aug 20 23:32:52.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qkwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:32:52.910: INFO: stderr: ""
Aug 20 23:32:52.910: INFO: stdout: "true"
Aug 20 23:32:52.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qkwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:32:53.113: INFO: stderr: ""
Aug 20 23:32:53.113: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 23:32:53.113: INFO: validating pod update-demo-nautilus-5qkwz
Aug 20 23:32:53.119: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 23:32:53.119: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 23:32:53.119: INFO: update-demo-nautilus-5qkwz is verified up and running
Aug 20 23:32:53.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntvkm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:32:53.209: INFO: stderr: ""
Aug 20 23:32:53.209: INFO: stdout: "true"
Aug 20 23:32:53.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntvkm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:32:53.304: INFO: stderr: ""
Aug 20 23:32:53.304: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 23:32:53.304: INFO: validating pod update-demo-nautilus-ntvkm
Aug 20 23:32:53.308: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 23:32:53.308: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 23:32:53.308: INFO: update-demo-nautilus-ntvkm is verified up and running
STEP: rolling-update to new replication controller
Aug 20 23:32:53.311: INFO: scanned /root for discovery docs: 
Aug 20 23:32:53.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2849'
Aug 20 23:33:18.263: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 20 23:33:18.263: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 20 23:33:18.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2849'
Aug 20 23:33:18.362: INFO: stderr: ""
Aug 20 23:33:18.362: INFO: stdout: "update-demo-kitten-lk6zn update-demo-kitten-wgnzz "
Aug 20 23:33:18.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lk6zn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:33:18.551: INFO: stderr: ""
Aug 20 23:33:18.551: INFO: stdout: "true"
Aug 20 23:33:18.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lk6zn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:33:18.640: INFO: stderr: ""
Aug 20 23:33:18.640: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 20 23:33:18.640: INFO: validating pod update-demo-kitten-lk6zn
Aug 20 23:33:18.643: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 20 23:33:18.643: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 20 23:33:18.643: INFO: update-demo-kitten-lk6zn is verified up and running
Aug 20 23:33:18.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wgnzz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:33:18.732: INFO: stderr: ""
Aug 20 23:33:18.732: INFO: stdout: "true"
Aug 20 23:33:18.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wgnzz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2849'
Aug 20 23:33:18.829: INFO: stderr: ""
Aug 20 23:33:18.829: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 20 23:33:18.829: INFO: validating pod update-demo-kitten-wgnzz
Aug 20 23:33:18.833: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 20 23:33:18.833: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 20 23:33:18.833: INFO: update-demo-kitten-wgnzz is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:33:18.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2849" for this suite.

• [SLOW TEST:31.762 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":175,"skipped":2744,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:33:18.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:33:18.885: INFO: Creating deployment "webserver-deployment"
Aug 20 23:33:18.898: INFO: Waiting for observed generation 1
Aug 20 23:33:21.003: INFO: Waiting for all required pods to come up
Aug 20 23:33:21.006: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 20 23:33:31.112: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 20 23:33:31.117: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 20 23:33:31.123: INFO: Updating deployment webserver-deployment
Aug 20 23:33:31.123: INFO: Waiting for observed generation 2
Aug 20 23:33:33.659: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 20 23:33:34.465: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 20 23:33:34.467: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 20 23:33:35.001: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 20 23:33:35.001: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 20 23:33:35.016: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 20 23:33:35.020: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 20 23:33:35.020: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 20 23:33:35.026: INFO: Updating deployment webserver-deployment
Aug 20 23:33:35.026: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 20 23:33:35.174: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 20 23:33:37.390: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
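The arithmetic behind these assertions: before the scale-up, the stuck rollout holds the old replicaset at 8 (10 desired minus maxUnavailable=2) and the new one, which can never become ready because of the non-existent webserver:404 image, at 5 (the 13-pod ceiling of 10 plus maxSurge=3, minus the old 8). Scaling the deployment from 10 to 30 raises the ceiling to 33, and the additional replicas are distributed proportionally to the replicasets' current sizes of 8 and 5, yielding 8+12=20 and 5+8=13, which is exactly what the log verifies. A sketch of the strategy fields that drive this, using the apps/v1 Go types:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    maxUnavailable := intstr.FromInt(2)
    maxSurge := intstr.FromInt(3)
    strategy := appsv1.DeploymentStrategy{
        Type: appsv1.RollingUpdateDeploymentStrategyType,
        RollingUpdate: &appsv1.RollingUpdateDeployment{
            MaxUnavailable: &maxUnavailable,
            MaxSurge:       &maxSurge,
        },
    }
    // With replicas=30 the controller may run at most 30+3=33 pods and must
    // keep at least 30-2=28 available; the log shows the two replicasets
    // settling at 20 and 13 (total 33) after the scale-up.
    fmt.Println(strategy.Type, "maxSurge:", maxSurge.IntValue(),
        "maxUnavailable:", maxUnavailable.IntValue())
}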
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 20 23:33:37.842: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-3050 /apis/apps/v1/namespaces/deployment-3050/deployments/webserver-deployment b8e2e86a-3726-4aeb-a29d-c125174eb933 1956757 3 2020-08-20 23:33:18 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f60548  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-20 23:33:35 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-20 23:33:35 +0000 UTC,LastTransitionTime:2020-08-20 23:33:18 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 20 23:33:37.980: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-3050 /apis/apps/v1/namespaces/deployment-3050/replicasets/webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 1956750 3 2020-08-20 23:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b8e2e86a-3726-4aeb-a29d-c125174eb933 0xc0046bf597 0xc0046bf598}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046bf608  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:33:37.980: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 20 23:33:37.981: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-3050 /apis/apps/v1/namespaces/deployment-3050/replicasets/webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 1956747 3 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b8e2e86a-3726-4aeb-a29d-c125174eb933 0xc0046bf4d7 0xc0046bf4d8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046bf538  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:33:38.031: INFO: Pod "webserver-deployment-595b5b9587-48t89" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-48t89 webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-48t89 66c06286-2362-41c3-a122-8409fb752add 1956788 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc0046bfad7 0xc0046bfad8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.031: INFO: Pod "webserver-deployment-595b5b9587-5psz4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5psz4 webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-5psz4 60ba61a5-e4d1-4c99-b45d-8d0a1cf040cf 1956563 0 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc0046bfc37 0xc0046bfc38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.59,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://da7c60cb08c5bf330b316fcd87dd995b31f7d10b5296f79eb6be7a13a9ccb107,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.031: INFO: Pod "webserver-deployment-595b5b9587-8blcf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8blcf webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-8blcf b1074b6e-cac7-498e-8c1f-969a51b9057e 1956561 0 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc0046bfdb7 0xc0046bfdb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.65,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://471b8c8badf03f3010d5b5380787cf958cb913d34299e7230dc0f55922b2e42d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.032: INFO: Pod "webserver-deployment-595b5b9587-bslhf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bslhf webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-bslhf 54c3add0-55b5-4056-b016-2e33f758a13e 1956572 0 2020-08-20 23:33:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc0046bff37 0xc0046bff38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.61,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://53fdf087428937d715a0399d93f9f7c03fa34d91bc76b7aee9ac3ea1237188d9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.032: INFO: Pod "webserver-deployment-595b5b9587-gfxz4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gfxz4 webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-gfxz4 745bc2a5-20bf-404b-aa46-a1852e1d1e9b 1956577 0 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353c0c7 0xc00353c0c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.60,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://97f410ec016b51faf59abd3f5c6463cc7f5ff20fdc723061903e708967d43a43,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.032: INFO: Pod "webserver-deployment-595b5b9587-gsf2p" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gsf2p webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-gsf2p 2f309b9a-b71c-428b-aa64-33a5514b5e6a 1956518 0 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353c247 0xc00353c248}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.58,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1f2a1e5d72a49d409eac0366c06aa7bf6f2a9cc9ebfca40d29f237c339e4b8c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.033: INFO: Pod "webserver-deployment-595b5b9587-jrfd7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jrfd7 webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-jrfd7 486d791c-11d9-47f3-a23b-afedcc2f42c9 1956787 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353c3c7 0xc00353c3c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
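The "is available" / "is not available" labels on these dumps reflect the deployment pod-availability rule: a pod counts as available once its Ready condition is True and has stayed True for the deployment's minReadySeconds. The pods above are Pending with Ready=False (reason ContainersNotReady), so they fail that check. A minimal Go sketch of the rule, written against k8s.io/api as a simplified stand-in for the framework's own helper, not the exact e2e code:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable is a simplified availability check: the Ready condition
// must be True, and must have been True for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for i := range pod.Status.Conditions {
		c := pod.Status.Conditions[i]
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := now.Time.Sub(c.LastTransitionTime.Time)
		return readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false // no Ready condition recorded yet
}

func main() {
	// Mirrors the Pending pods in the dumps: Ready=False / ContainersNotReady.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		Conditions: []corev1.PodCondition{{
			Type:   corev1.PodReady,
			Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady",
		}},
	}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // false
}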
Aug 20 23:33:38.033: INFO: Pod "webserver-deployment-595b5b9587-lvw9l" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lvw9l webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-lvw9l 3ccff776-33e8-4fd4-8fb9-530bf92767c7 1956754 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353c527 0xc00353c528}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.033: INFO: Pod "webserver-deployment-595b5b9587-pxtcm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pxtcm webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-pxtcm f8c50a5b-5edd-4acb-bef8-f032f5ee6a87 1956762 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353c687 0xc00353c688}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.034: INFO: Pod "webserver-deployment-595b5b9587-qxfqd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qxfqd webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-qxfqd c65b33e2-d2f7-4287-9338-d1c334387a8a 1956587 0 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353c7f7 0xc00353c7f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.62,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7d32b5027a97c19bb82d0c023d45c2c9f4ab3ec4cac8da3057a3b66f3b65bfb9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
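webserver-deployment-595b5b9587-qxfqd above shows the available case: Phase Running, Ready=True, and a pod IP (10.244.2.62) assigned. An inventory like the one this test prints can be reproduced with client-go. The namespace deployment-3050 and the name=httpd label come straight from the dumps but exist only while the test runs; the kubeconfig location is an assumption. A sketch against a recent client-go (where List takes a context):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// RecommendedHomeFile is ~/.kube/config; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector taken from the dumps above; both are
	// transient and only exist for the duration of the test.
	pods, err := client.CoreV1().Pods("deployment-3050").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s phase=%-8s podIP=%-13s node=%s\n",
			p.Name, p.Status.Phase, p.Status.PodIP, p.Spec.NodeName)
	}
}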
Aug 20 23:33:38.034: INFO: Pod "webserver-deployment-595b5b9587-s2ccl" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s2ccl webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-s2ccl 4e5fde15-50fb-44e6-830e-44653e27b454 1956792 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353c977 0xc00353c978}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.034: INFO: Pod "webserver-deployment-595b5b9587-s597c" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s597c webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-s597c 6fee57fd-9e2f-4c00-9c97-4f9697e12f17 1956742 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353cad7 0xc00353cad8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.035: INFO: Pod "webserver-deployment-595b5b9587-smdw8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-smdw8 webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-smdw8 31902e21-8217-4516-bd94-ebbdc2125b37 1956515 0 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353cc37 0xc00353cc38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.64,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e4d408781a72d7bf98f7775004faf907a04c16733538f8f2342809cc547a3f0d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.035: INFO: Pod "webserver-deployment-595b5b9587-t4xdd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t4xdd webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-t4xdd 490898d4-c296-4d30-9513-08c0febad0f8 1956755 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353cdb7 0xc00353cdb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.035: INFO: Pod "webserver-deployment-595b5b9587-t7glb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t7glb webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-t7glb 29e9a7bb-db72-4776-a806-47feffd13ff3 1956779 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353cf17 0xc00353cf18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.036: INFO: Pod "webserver-deployment-595b5b9587-txjmk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-txjmk webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-txjmk 4d633417-5941-4b4d-8fb0-ec1917eefcae 1956763 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353d077 0xc00353d078}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.036: INFO: Pod "webserver-deployment-595b5b9587-wfl7q" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wfl7q webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-wfl7q f6ce8968-a8dd-4904-82f6-5cc81a6e30b8 1956780 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353d1d7 0xc00353d1d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.036: INFO: Pod "webserver-deployment-595b5b9587-wkqq5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wkqq5 webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-wkqq5 58f16cc6-a38e-4b22-8145-73fdaf520778 1956820 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353d337 0xc00353d338}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.037: INFO: Pod "webserver-deployment-595b5b9587-xd5s8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xd5s8 webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-xd5s8 b5950d58-edca-4eec-aee3-d117749257d1 1956591 0 2020-08-20 23:33:18 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353d497 0xc00353d498}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.66,StartTime:2020-08-20 23:33:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:33:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f8e38dd1fd39d3a75ee122b2a59f9ccc1acebe849685648542e1163fd8f8e9f0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.037: INFO: Pod "webserver-deployment-595b5b9587-zcb6f" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zcb6f webserver-deployment-595b5b9587- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-595b5b9587-zcb6f fe5ed243-10b8-4fe8-9246-5471004d462e 1956815 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 69f3b946-8fda-46d1-bfdb-fad227aaa0da 0xc00353d617 0xc00353d618}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.037: INFO: Pod "webserver-deployment-c7997dcc8-4lx9t" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4lx9t webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-4lx9t 5a828ba0-0c14-487b-88c5-31d5f0ec04d0 1956680 0 2020-08-20 23:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc00353d777 0xc00353d778}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.038: INFO: Pod "webserver-deployment-c7997dcc8-4t8gb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4t8gb webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-4t8gb 4ca6c2f5-b149-47a4-b2bb-7d6d6e1f054d 1956769 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc00353d8f7 0xc00353d8f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.038: INFO: Pod "webserver-deployment-c7997dcc8-8jzh6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8jzh6 webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-8jzh6 08a60a1b-7aeb-4493-ba0c-7cedf134cf2c 1956830 0 2020-08-20 23:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc00353da97 0xc00353da98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.64,StartTime:2020-08-20 23:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.038: INFO: Pod "webserver-deployment-c7997dcc8-fmxtn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fmxtn webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-fmxtn f78be25d-66dc-48a8-a09b-fd1c59a1e1fe 1956770 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc00353dc47 0xc00353dc48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.039: INFO: Pod "webserver-deployment-c7997dcc8-j5txh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j5txh webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-j5txh 6b82bb70-e536-4778-b208-45a457c0c561 1956821 0 2020-08-20 23:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc00353ddc7 0xc00353ddc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.63,StartTime:2020-08-20 23:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.039: INFO: Pod "webserver-deployment-c7997dcc8-mwcvs" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mwcvs webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-mwcvs aa19f423-e894-4655-a707-dd2776873cb2 1956816 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc00353df77 0xc00353df78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.039: INFO: Pod "webserver-deployment-c7997dcc8-qtzgt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qtzgt webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-qtzgt 074a4e91-767d-405c-badf-746ba11b90d7 1956793 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc0007fe8c7 0xc0007fe8c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.039: INFO: Pod "webserver-deployment-c7997dcc8-qzw99" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qzw99 webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-qzw99 d0978dd3-7c0b-4791-bb30-b6ae23c3e112 1956799 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc0007febb7 0xc0007febb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.040: INFO: Pod "webserver-deployment-c7997dcc8-td6sm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-td6sm webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-td6sm e7529733-6431-4c37-9811-9d73a156f962 1956806 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc0007fee77 0xc0007fee78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.040: INFO: Pod "webserver-deployment-c7997dcc8-tdnxn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tdnxn webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-tdnxn f0140fde-5485-4523-9cf2-68422acbcd30 1956672 0 2020-08-20 23:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc0007ff157 0xc0007ff158}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.040: INFO: Pod "webserver-deployment-c7997dcc8-vtmsn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vtmsn webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-vtmsn cb92aa98-be39-4061-9fca-ab88859afe9a 1956825 0 2020-08-20 23:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc0007ff3f7 0xc0007ff3f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.69,StartTime:2020-08-20 23:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.040: INFO: Pod "webserver-deployment-c7997dcc8-zgcrd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zgcrd webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-zgcrd 5db4f39d-c635-4beb-812d-841ba819fe2a 1956832 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc0007ff737 0xc0007ff738}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 20 23:33:38.041: INFO: Pod "webserver-deployment-c7997dcc8-zvxhn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zvxhn webserver-deployment-c7997dcc8- deployment-3050 /api/v1/namespaces/deployment-3050/pods/webserver-deployment-c7997dcc8-zvxhn 76cf5c5a-f82f-428d-b706-3a570d609afd 1956810 0 2020-08-20 23:33:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4a0f48ea-a544-4892-bded-946ecf2bca07 0xc0007ff9e7 0xc0007ff9e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:33:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-20 23:33:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:33:38.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3050" for this suite.

• [SLOW TEST:19.209 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":176,"skipped":2754,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:33:38.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:33:41.059: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 20 23:33:43.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563220, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:33:46.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563220, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:33:47.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563220, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:33:49.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563220, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:33:51.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563221, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563220, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:33:54.572: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:33:54.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Creating a v2 custom resource
STEP: Listing CRs in v1
STEP: Listing CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:33:57.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-929" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:20.207 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":177,"skipped":2791,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:33:58.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:34:08.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-25" for this suite.

• [SLOW TEST:9.987 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":178,"skipped":2794,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:34:08.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 20 23:34:12.918: INFO: Successfully updated pod "labelsupdatecb81a28c-291e-4067-ad4e-f28e35041010"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:34:14.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9857" for this suite.

• [SLOW TEST:6.700 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:34:14.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-d053672e-766d-46fe-9bf5-7054ffa79a16
STEP: Creating secret with name s-test-opt-upd-133df80e-ae34-44e3-a1de-e0751f365db6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d053672e-766d-46fe-9bf5-7054ffa79a16
STEP: Updating secret s-test-opt-upd-133df80e-ae34-44e3-a1de-e0751f365db6
STEP: Creating secret with name s-test-opt-create-188d6300-e1f5-48d5-8431-a4ed818d6a10
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:35:57.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7478" for this suite.

• [SLOW TEST:102.267 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2875,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:35:57.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6550
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 20 23:35:57.298: INFO: Found 0 stateful pods, waiting for 3
Aug 20 23:36:07.303: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:36:07.303: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:36:07.303: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 20 23:36:17.303: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:36:17.303: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:36:17.303: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 20 23:36:17.330: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 20 23:36:27.386: INFO: Updating stateful set ss2
Aug 20 23:36:27.392: INFO: Waiting for Pod statefulset-6550/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 20 23:36:37.400: INFO: Waiting for Pod statefulset-6550/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 20 23:36:47.981: INFO: Found 2 stateful pods, waiting for 3
Aug 20 23:36:57.986: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:36:57.986: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:36:57.986: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 20 23:36:58.007: INFO: Updating stateful set ss2
Aug 20 23:36:58.071: INFO: Waiting for Pod statefulset-6550/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 20 23:37:08.081: INFO: Waiting for Pod statefulset-6550/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 20 23:37:18.097: INFO: Updating stateful set ss2
Aug 20 23:37:18.122: INFO: Waiting for StatefulSet statefulset-6550/ss2 to complete update
Aug 20 23:37:18.122: INFO: Waiting for Pod statefulset-6550/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 20 23:37:28.146: INFO: Waiting for StatefulSet statefulset-6550/ss2 to complete update
Aug 20 23:37:28.146: INFO: Waiting for Pod statefulset-6550/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 20 23:37:38.131: INFO: Deleting all statefulset in ns statefulset-6550
Aug 20 23:37:38.134: INFO: Scaling statefulset ss2 to 0
Aug 20 23:38:18.174: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 23:38:18.177: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:18.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6550" for this suite.

• [SLOW TEST:141.004 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":181,"skipped":2885,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:18.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 20 23:38:24.493: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-699 PodName:pod-sharedvolume-bdd827a9-9092-4386-8db2-3454153becba ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 23:38:24.493: INFO: >>> kubeConfig: /root/.kube/config
I0820 23:38:24.532927       6 log.go:172] (0xc002dee4d0) (0xc001383ea0) Create stream
I0820 23:38:24.532959       6 log.go:172] (0xc002dee4d0) (0xc001383ea0) Stream added, broadcasting: 1
I0820 23:38:24.535151       6 log.go:172] (0xc002dee4d0) Reply frame received for 1
I0820 23:38:24.535195       6 log.go:172] (0xc002dee4d0) (0xc002e503c0) Create stream
I0820 23:38:24.535212       6 log.go:172] (0xc002dee4d0) (0xc002e503c0) Stream added, broadcasting: 3
I0820 23:38:24.536311       6 log.go:172] (0xc002dee4d0) Reply frame received for 3
I0820 23:38:24.536335       6 log.go:172] (0xc002dee4d0) (0xc0024120a0) Create stream
I0820 23:38:24.536346       6 log.go:172] (0xc002dee4d0) (0xc0024120a0) Stream added, broadcasting: 5
I0820 23:38:24.537367       6 log.go:172] (0xc002dee4d0) Reply frame received for 5
I0820 23:38:24.605527       6 log.go:172] (0xc002dee4d0) Data frame received for 3
I0820 23:38:24.605568       6 log.go:172] (0xc002e503c0) (3) Data frame handling
I0820 23:38:24.605582       6 log.go:172] (0xc002e503c0) (3) Data frame sent
I0820 23:38:24.605602       6 log.go:172] (0xc002dee4d0) Data frame received for 3
I0820 23:38:24.605612       6 log.go:172] (0xc002e503c0) (3) Data frame handling
I0820 23:38:24.605654       6 log.go:172] (0xc002dee4d0) Data frame received for 5
I0820 23:38:24.605696       6 log.go:172] (0xc0024120a0) (5) Data frame handling
I0820 23:38:24.607035       6 log.go:172] (0xc002dee4d0) Data frame received for 1
I0820 23:38:24.607068       6 log.go:172] (0xc001383ea0) (1) Data frame handling
I0820 23:38:24.607081       6 log.go:172] (0xc001383ea0) (1) Data frame sent
I0820 23:38:24.607096       6 log.go:172] (0xc002dee4d0) (0xc001383ea0) Stream removed, broadcasting: 1
I0820 23:38:24.607141       6 log.go:172] (0xc002dee4d0) Go away received
I0820 23:38:24.607211       6 log.go:172] (0xc002dee4d0) (0xc001383ea0) Stream removed, broadcasting: 1
I0820 23:38:24.607230       6 log.go:172] (0xc002dee4d0) (0xc002e503c0) Stream removed, broadcasting: 3
I0820 23:38:24.607240       6 log.go:172] (0xc002dee4d0) (0xc0024120a0) Stream removed, broadcasting: 5
Aug 20 23:38:24.607: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:24.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-699" for this suite.

• [SLOW TEST:6.397 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":182,"skipped":2944,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:24.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:38:24.751: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 20 23:38:29.779: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 20 23:38:29.779: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 20 23:38:33.857: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-6377 /apis/apps/v1/namespaces/deployment-6377/deployments/test-cleanup-deployment 57e07288-0aa4-4172-8cff-818c4d73a813 1958500 1 2020-08-20 23:38:29 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005e21998  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-20 23:38:29 +0000 UTC,LastTransitionTime:2020-08-20 23:38:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-08-20 23:38:32 +0000 UTC,LastTransitionTime:2020-08-20 23:38:29 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 20 23:38:33.861: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-6377 /apis/apps/v1/namespaces/deployment-6377/replicasets/test-cleanup-deployment-55ffc6b7b6 1b3fb59d-19f2-428a-89ec-5919fc42bbcb 1958488 1 2020-08-20 23:38:29 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 57e07288-0aa4-4172-8cff-818c4d73a813 0xc005e21d47 0xc005e21d48}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005e21db8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 20 23:38:33.863: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-s72h6" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-s72h6 test-cleanup-deployment-55ffc6b7b6- deployment-6377 /api/v1/namespaces/deployment-6377/pods/test-cleanup-deployment-55ffc6b7b6-s72h6 b773a20e-beba-428f-af44-3639745b6575 1958487 0 2020-08-20 23:38:29 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 1b3fb59d-19f2-428a-89ec-5919fc42bbcb 0xc003946277 0xc003946278}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-klwgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-klwgl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-klwgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:38:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:38:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:38:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-20 23:38:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.82,StartTime:2020-08-20 23:38:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-20 23:38:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://5e6911f8b0e1da2c4ad2afa9ed3d555bcc550d6dceba52c3bb28d349d58c9719,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:33.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6377" for this suite.

• [SLOW TEST:9.256 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":183,"skipped":2951,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:33.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-72f00422-aa05-492c-a47f-fbfbf4ad28e0
STEP: Creating a pod to test consume secrets
Aug 20 23:38:33.999: INFO: Waiting up to 5m0s for pod "pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930" in namespace "secrets-9869" to be "success or failure"
Aug 20 23:38:34.019: INFO: Pod "pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116027ms
Aug 20 23:38:36.023: INFO: Pod "pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024046178s
Aug 20 23:38:38.028: INFO: Pod "pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028851631s
STEP: Saw pod success
Aug 20 23:38:38.028: INFO: Pod "pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930" satisfied condition "success or failure"
Aug 20 23:38:38.030: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930 container secret-volume-test: 
STEP: delete the pod
Aug 20 23:38:38.064: INFO: Waiting for pod pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930 to disappear
Aug 20 23:38:38.086: INFO: Pod pod-secrets-e0cc1326-08ec-4808-9267-b976a276f930 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:38.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9869" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2965,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:38.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 20 23:38:38.220: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-214 /api/v1/namespaces/watch-214/configmaps/e2e-watch-test-watch-closed 234f0e52-eb6d-414c-a886-ed5c899824ad 1958540 0 2020-08-20 23:38:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 20 23:38:38.220: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-214 /api/v1/namespaces/watch-214/configmaps/e2e-watch-test-watch-closed 234f0e52-eb6d-414c-a886-ed5c899824ad 1958541 0 2020-08-20 23:38:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 20 23:38:38.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-214 /api/v1/namespaces/watch-214/configmaps/e2e-watch-test-watch-closed 234f0e52-eb6d-414c-a886-ed5c899824ad 1958543 0 2020-08-20 23:38:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 20 23:38:38.310: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-214 /api/v1/namespaces/watch-214/configmaps/e2e-watch-test-watch-closed 234f0e52-eb6d-414c-a886-ed5c899824ad 1958544 0 2020-08-20 23:38:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:38.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-214" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":185,"skipped":3035,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:38.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:38:38.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 20 23:38:38.516: INFO: stderr: ""
Aug 20 23:38:38.516: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:38.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1565" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":186,"skipped":3049,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:38.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0820 23:38:48.621776       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 20 23:38:48.621: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:48.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3463" for this suite.

• [SLOW TEST:10.104 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":187,"skipped":3060,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:48.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-5484c876-7b1b-4cc6-8e75-f0d87491a820
STEP: Creating a pod to test consume secrets
Aug 20 23:38:48.731: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52" in namespace "projected-7214" to be "success or failure"
Aug 20 23:38:48.734: INFO: Pod "pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52": Phase="Pending", Reason="", readiness=false. Elapsed: 3.824842ms
Aug 20 23:38:50.739: INFO: Pod "pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007991917s
Aug 20 23:38:52.743: INFO: Pod "pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012361917s
STEP: Saw pod success
Aug 20 23:38:52.743: INFO: Pod "pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52" satisfied condition "success or failure"
Aug 20 23:38:52.747: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52 container projected-secret-volume-test: 
STEP: delete the pod
Aug 20 23:38:52.781: INFO: Waiting for pod pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52 to disappear
Aug 20 23:38:52.788: INFO: Pod pod-projected-secrets-6451d492-1c10-4482-9169-dda5606a7e52 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:38:52.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7214" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3077,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:38:52.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:38:52.882: INFO: Create a RollingUpdate DaemonSet
Aug 20 23:38:52.885: INFO: Check that daemon pods launch on every node of the cluster
Aug 20 23:38:52.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:38:52.896: INFO: Number of nodes with available pods: 0
Aug 20 23:38:52.896: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 23:38:53.940: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:38:53.950: INFO: Number of nodes with available pods: 0
Aug 20 23:38:53.950: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 23:38:54.934: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:38:54.938: INFO: Number of nodes with available pods: 0
Aug 20 23:38:54.938: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 23:38:55.901: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:38:55.905: INFO: Number of nodes with available pods: 0
Aug 20 23:38:55.905: INFO: Node jerma-worker is running more than one daemon pod
Aug 20 23:38:56.901: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:38:56.904: INFO: Number of nodes with available pods: 2
Aug 20 23:38:56.904: INFO: Number of running nodes: 2, number of available pods: 2
Aug 20 23:38:56.904: INFO: Update the DaemonSet to trigger a rollout
Aug 20 23:38:56.910: INFO: Updating DaemonSet daemon-set
Aug 20 23:39:11.977: INFO: Roll back the DaemonSet before rollout is complete
Aug 20 23:39:11.985: INFO: Updating DaemonSet daemon-set
Aug 20 23:39:11.985: INFO: Make sure DaemonSet rollback is complete
Aug 20 23:39:11.993: INFO: Wrong image for pod: daemon-set-cnmx5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 20 23:39:11.993: INFO: Pod daemon-set-cnmx5 is not available
Aug 20 23:39:11.999: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:39:13.003: INFO: Wrong image for pod: daemon-set-cnmx5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 20 23:39:13.004: INFO: Pod daemon-set-cnmx5 is not available
Aug 20 23:39:13.007: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:39:14.149: INFO: Wrong image for pod: daemon-set-cnmx5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 20 23:39:14.150: INFO: Pod daemon-set-cnmx5 is not available
Aug 20 23:39:14.191: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 23:39:15.003: INFO: Pod daemon-set-klrxz is not available
Aug 20 23:39:15.006: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9733, will wait for the garbage collector to delete the pods
Aug 20 23:39:15.069: INFO: Deleting DaemonSet.extensions daemon-set took: 5.553106ms
Aug 20 23:39:15.370: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.242455ms
Aug 20 23:39:21.673: INFO: Number of nodes with available pods: 0
Aug 20 23:39:21.673: INFO: Number of running nodes: 0, number of available pods: 0
Aug 20 23:39:21.675: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9733/daemonsets","resourceVersion":"1958864"},"items":null}

Aug 20 23:39:21.677: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9733/pods","resourceVersion":"1958864"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:39:21.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9733" for this suite.

• [SLOW TEST:28.900 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":189,"skipped":3079,"failed":0}
SSSSSSSSSSS
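As context for the record above: the rollout-then-rollback sequence exercises a DaemonSet whose updateStrategy is RollingUpdate. A minimal sketch in Go of such an object, reusing the names and the httpd image from this run but with an assumed label key (illustrative, not the test's actual source):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-9733"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate lets the controller roll pods forward on an image
			// change and back again when the pod template is reverted.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine", // the image the rollback restores
				}}},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}

The rollback itself is just a revert of the pod template (the run updates the image to foo:non-existent and then restores it); from the CLI the equivalent would be kubectl rollout undo daemonset/daemon-set.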
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:39:21.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-659
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-659 to expose endpoints map[]
Aug 20 23:39:21.848: INFO: Get endpoints failed (15.104594ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 20 23:39:22.852: INFO: successfully validated that service endpoint-test2 in namespace services-659 exposes endpoints map[] (1.018565082s elapsed)
STEP: Creating pod pod1 in namespace services-659
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-659 to expose endpoints map[pod1:[80]]
Aug 20 23:39:25.901: INFO: successfully validated that service endpoint-test2 in namespace services-659 exposes endpoints map[pod1:[80]] (3.041857956s elapsed)
STEP: Creating pod pod2 in namespace services-659
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-659 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 20 23:39:29.970: INFO: successfully validated that service endpoint-test2 in namespace services-659 exposes endpoints map[pod1:[80] pod2:[80]] (4.065762229s elapsed)
STEP: Deleting pod pod1 in namespace services-659
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-659 to expose endpoints map[pod2:[80]]
Aug 20 23:39:31.075: INFO: successfully validated that service endpoint-test2 in namespace services-659 exposes endpoints map[pod2:[80]] (1.100838893s elapsed)
STEP: Deleting pod pod2 in namespace services-659
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-659 to expose endpoints map[]
Aug 20 23:39:32.161: INFO: successfully validated that service endpoint-test2 in namespace services-659 exposes endpoints map[] (1.081011261s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:39:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-659" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:10.605 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":190,"skipped":3090,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
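The endpoint bookkeeping logged above follows directly from a selector-based Service: pods whose labels match the selector are added to the endpoints object, and deleting them removes them again. A minimal Go sketch, with the service name and namespace from this run and an assumed label key:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2", Namespace: "services-659"},
		Spec: corev1.ServiceSpec{
			// Pods labelled name=endpoint-test2 become endpoints; creating
			// pod1/pod2 adds them and deleting them empties the map again,
			// exactly the map[] -> map[pod1:[80] pod2:[80]] -> map[] sequence above.
			Selector: map[string]string{"name": "endpoint-test2"}, // assumed label key
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	b, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(b))
}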
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:39:32.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 20 23:39:32.559: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef" in namespace "downward-api-4258" to be "success or failure"
Aug 20 23:39:32.730: INFO: Pod "downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef": Phase="Pending", Reason="", readiness=false. Elapsed: 170.946266ms
Aug 20 23:39:34.734: INFO: Pod "downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174982901s
Aug 20 23:39:36.738: INFO: Pod "downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef": Phase="Running", Reason="", readiness=true. Elapsed: 4.17908407s
Aug 20 23:39:38.743: INFO: Pod "downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183520799s
STEP: Saw pod success
Aug 20 23:39:38.743: INFO: Pod "downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef" satisfied condition "success or failure"
Aug 20 23:39:38.746: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef container client-container: 
STEP: delete the pod
Aug 20 23:39:38.785: INFO: Waiting for pod downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef to disappear
Aug 20 23:39:38.789: INFO: Pod downwardapi-volume-29c44d7c-7421-4db7-8fb3-c7976f0e11ef no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:39:38.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4258" for this suite.

• [SLOW TEST:6.530 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3141,"failed":0}
SS
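The "downward API volume plugin" being tested exposes pod and container fields as files. For the memory-limit case, the volume item uses a resourceFieldRef; a minimal Go sketch (file path illustrative, container name taken from the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume exposing the container's memory limit as a file, which the test
	// container then prints to its logs.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit", // illustrative file name
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // must name a container in the same pod
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}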
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:39:38.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 20 23:39:38.989: INFO: Waiting up to 5m0s for pod "client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268" in namespace "containers-2252" to be "success or failure"
Aug 20 23:39:38.993: INFO: Pod "client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268": Phase="Pending", Reason="", readiness=false. Elapsed: 3.777558ms
Aug 20 23:39:41.006: INFO: Pod "client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016402893s
Aug 20 23:39:43.030: INFO: Pod "client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040505235s
STEP: Saw pod success
Aug 20 23:39:43.030: INFO: Pod "client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268" satisfied condition "success or failure"
Aug 20 23:39:43.032: INFO: Trying to get logs from node jerma-worker2 pod client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268 container test-container: 
STEP: delete the pod
Aug 20 23:39:43.212: INFO: Waiting for pod client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268 to disappear
Aug 20 23:39:43.271: INFO: Pod client-containers-927cfb55-0dc2-4537-828b-6ecb1f197268 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:39:43.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2252" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3143,"failed":0}
SS
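"Override the image's default arguments (docker cmd)" maps to the container's args field: setting args replaces the image's CMD while leaving its ENTRYPOINT intact (setting command would replace the ENTRYPOINT as well). A minimal Go sketch, with an assumed image and argument list:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",                  // container name from the log
		Image: "docker.io/library/busybox:1.29",  // assumed test image
		// Args overrides the image's CMD; the test then reads the container's
		// output to confirm the override took effect.
		Args: []string{"echo", "override", "arguments"},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}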
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:39:43.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 20 23:39:47.892: INFO: Successfully updated pod "annotationupdatec2728a7e-787e-4d33-bae7-7887a4f9ca1e"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:39:49.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3613" for this suite.

• [SLOW TEST:6.655 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3145,"failed":0}
SSSSSSSSSS
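What "Successfully updated pod" is verifying above: a projected downwardAPI volume exposes the pod's annotations as a file, and the kubelet rewrites that file when the annotations change. A minimal Go sketch of the volume (names illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file tracks metadata.annotations; unlike env vars,
							// volume-projected fields are updated in place.
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}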
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:39:49.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-ea06e65c-171e-4512-a9a7-d0e9d5a364ff
STEP: Creating a pod to test consume secrets
Aug 20 23:39:50.025: INFO: Waiting up to 5m0s for pod "pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8" in namespace "secrets-2374" to be "success or failure"
Aug 20 23:39:50.029: INFO: Pod "pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338228ms
Aug 20 23:39:52.033: INFO: Pod "pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008839262s
Aug 20 23:39:54.038: INFO: Pod "pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013325819s
STEP: Saw pod success
Aug 20 23:39:54.038: INFO: Pod "pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8" satisfied condition "success or failure"
Aug 20 23:39:54.041: INFO: Trying to get logs from node jerma-worker pod pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8 container secret-volume-test: 
STEP: delete the pod
Aug 20 23:39:54.091: INFO: Waiting for pod pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8 to disappear
Aug 20 23:39:54.110: INFO: Pod pod-secrets-54a47287-311d-4392-87ac-d8be86e160d8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:39:54.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2374" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3155,"failed":0}
SSSSSSSSSSS
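The "non-root with defaultMode and fsGroup set" case combines a secret volume's file mode with pod-level user/group settings. A minimal Go sketch, reusing the secret and container names from the log; the image, uid, gid, and mode are assumed values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(v int64) *int64 { return &v }
func int32Ptr(v int32) *int32 { return &v }

func main() {
	pod := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: int64Ptr(1000), // assumed non-root uid
			FSGroup:   int64Ptr(1001), // assumed gid applied to the volume
		},
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName:  "secret-test-ea06e65c-171e-4512-a9a7-d0e9d5a364ff",
					DefaultMode: int32Ptr(0440), // assumed octal mode for projected keys
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "secret-volume-test",
			Image:        "docker.io/library/busybox:1.29", // assumed
			VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
		}},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}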
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:39:54.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-09088ca0-efaf-47f4-bf6d-6e94f2cb7dc1 in namespace container-probe-4321
Aug 20 23:39:58.213: INFO: Started pod busybox-09088ca0-efaf-47f4-bf6d-6e94f2cb7dc1 in namespace container-probe-4321
STEP: checking the pod's current state and verifying that restartCount is present
Aug 20 23:39:58.216: INFO: Initial restart count of pod busybox-09088ca0-efaf-47f4-bf6d-6e94f2cb7dc1 is 0
Aug 20 23:40:48.505: INFO: Restart count of pod container-probe-4321/busybox-09088ca0-efaf-47f4-bf6d-6e94f2cb7dc1 is now 1 (50.28855265s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:40:48.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4321" for this suite.

• [SLOW TEST:54.409 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3166,"failed":0}
SSSSSSSSSSSSSSSSSSSS
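The restartCount 0 -> 1 transition above is the liveness machinery at work: the container creates /tmp/health, later removes it, the exec probe starts failing, and the kubelet restarts the container. A minimal Go sketch under those assumptions (image and timings illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Field-name note: recent k8s.io/api releases embed ProbeHandler here;
	// releases contemporary with this log (v1.17) called the embedded type Handler.
	pod := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "busybox",
			Image: "docker.io/library/busybox:1.29", // assumed
			// Create the probe target, keep it briefly, then remove it so the
			// probe fails and the kubelet restarts the container.
			Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
			LivenessProbe: &corev1.Probe{
				ProbeHandler: corev1.ProbeHandler{
					Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
				},
				InitialDelaySeconds: 15, // assumed
				FailureThreshold:    1,  // assumed
			},
		}},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}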
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:40:48.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e61be3ad-596e-4503-a0fe-bb7441831d77
STEP: Creating a pod to test consume secrets
Aug 20 23:40:49.060: INFO: Waiting up to 5m0s for pod "pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1" in namespace "secrets-7032" to be "success or failure"
Aug 20 23:40:49.078: INFO: Pod "pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.88758ms
Aug 20 23:40:51.096: INFO: Pod "pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035623304s
Aug 20 23:40:53.100: INFO: Pod "pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039527038s
STEP: Saw pod success
Aug 20 23:40:53.100: INFO: Pod "pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1" satisfied condition "success or failure"
Aug 20 23:40:53.103: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1 container secret-volume-test: 
STEP: delete the pod
Aug 20 23:40:53.121: INFO: Waiting for pod pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1 to disappear
Aug 20 23:40:53.137: INFO: Pod pod-secrets-4dc80b8a-0892-4e41-bfa0-2216a9f594e1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:40:53.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7032" for this suite.
STEP: Destroying namespace "secret-namespace-980" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3186,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:40:53.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 20 23:40:53.411: INFO: Waiting up to 5m0s for pod "pod-a47a2cd2-1576-439b-8235-4395126683fb" in namespace "emptydir-3736" to be "success or failure"
Aug 20 23:40:53.564: INFO: Pod "pod-a47a2cd2-1576-439b-8235-4395126683fb": Phase="Pending", Reason="", readiness=false. Elapsed: 153.289154ms
Aug 20 23:40:55.568: INFO: Pod "pod-a47a2cd2-1576-439b-8235-4395126683fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157186481s
Aug 20 23:40:57.572: INFO: Pod "pod-a47a2cd2-1576-439b-8235-4395126683fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161362922s
STEP: Saw pod success
Aug 20 23:40:57.572: INFO: Pod "pod-a47a2cd2-1576-439b-8235-4395126683fb" satisfied condition "success or failure"
Aug 20 23:40:57.575: INFO: Trying to get logs from node jerma-worker2 pod pod-a47a2cd2-1576-439b-8235-4395126683fb container test-container: 
STEP: delete the pod
Aug 20 23:40:57.595: INFO: Waiting for pod pod-a47a2cd2-1576-439b-8235-4395126683fb to disappear
Aug 20 23:40:57.599: INFO: Pod pod-a47a2cd2-1576-439b-8235-4395126683fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:40:57.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3736" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3189,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
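"(non-root,0666,tmpfs)" decodes as: a memory-backed emptyDir, written by a non-root container, with the test asserting the resulting file carries mode 0666. The mode is produced by the test container, not by the volume itself (emptyDir has no mode field). A minimal Go sketch; the image and uid are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(v int64) *int64 { return &v }

func main() {
	pod := corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium: Memory backs the volume with tmpfs rather than node disk.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:            "test-container",
			Image:           "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed test image
			SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1001)}, // assumed non-root uid
			VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}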
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:40:57.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:01.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9782" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3227,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:01.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 20 23:41:01.823: INFO: Waiting up to 5m0s for pod "pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887" in namespace "emptydir-4065" to be "success or failure"
Aug 20 23:41:01.863: INFO: Pod "pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887": Phase="Pending", Reason="", readiness=false. Elapsed: 40.298606ms
Aug 20 23:41:03.934: INFO: Pod "pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111036784s
Aug 20 23:41:05.938: INFO: Pod "pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887": Phase="Running", Reason="", readiness=true. Elapsed: 4.115227978s
Aug 20 23:41:07.942: INFO: Pod "pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119399724s
STEP: Saw pod success
Aug 20 23:41:07.942: INFO: Pod "pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887" satisfied condition "success or failure"
Aug 20 23:41:07.945: INFO: Trying to get logs from node jerma-worker2 pod pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887 container test-container: 
STEP: delete the pod
Aug 20 23:41:08.050: INFO: Waiting for pod pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887 to disappear
Aug 20 23:41:08.054: INFO: Pod pod-8240a8ff-4b4d-49d7-a172-30c2e71c1887 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:08.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4065" for this suite.

• [SLOW TEST:6.328 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3249,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:08.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 20 23:41:11.223: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:11.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6534" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3253,"failed":0}
SSSSSSSSS
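The "Expected: &{DONE} to match" line above is comparing the container status's termination message, which the kubelet copies from the file at terminationMessagePath when the container exits. A minimal Go sketch of a container writing that file at a non-default path as a non-root user (image, path, and uid are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(v int64) *int64 { return &v }

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "docker.io/library/busybox:1.29", // assumed
		Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		// The kubelet reads this file on exit and surfaces it in the
		// container status, which is what the test asserts against.
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: int64Ptr(1000)},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}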
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:11.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:41:12.323: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 23:41:14.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563672, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563672, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563672, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563672, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:41:17.364: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:17.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1217" for this suite.
STEP: Destroying namespace "webhook-1217-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.142 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":201,"skipped":3262,"failed":0}
S
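"Fail closed" means the webhook's failurePolicy is Fail: when the API server cannot reach the webhook backend (here, deliberately unreachable), every matching request is rejected rather than admitted. A minimal Go sketch of such a configuration, reusing the service name and namespace from the log; the path, port, and rule scope are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	path := "/configmaps" // assumed handler path
	port := int32(8443)   // assumed service port

	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed.k8s.io"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "fail-closed.k8s.io",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-1217", Name: "e2e-test-webhook", Path: &path, Port: &port,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			// Fail = reject matching requests whenever the backend is unreachable.
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}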
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:17.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:41:17.620: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-974da00d-e154-4efa-8ee9-95dbe1bda1b7" in namespace "security-context-test-7466" to be "success or failure"
Aug 20 23:41:17.623: INFO: Pod "alpine-nnp-false-974da00d-e154-4efa-8ee9-95dbe1bda1b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.942118ms
Aug 20 23:41:19.628: INFO: Pod "alpine-nnp-false-974da00d-e154-4efa-8ee9-95dbe1bda1b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007102585s
Aug 20 23:41:21.632: INFO: Pod "alpine-nnp-false-974da00d-e154-4efa-8ee9-95dbe1bda1b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011158039s
Aug 20 23:41:21.632: INFO: Pod "alpine-nnp-false-974da00d-e154-4efa-8ee9-95dbe1bda1b7" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:21.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7466" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
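The behaviour under test: with allowPrivilegeEscalation set to false, the container runs with no_new_privs, so a setuid binary cannot gain more privileges than the pod's configured user; the test runs such a binary and asserts the effective uid is unchanged. A minimal Go sketch (image and uid are assumptions; the container name matches the pod in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(v bool) *bool    { return &v }
func int64Ptr(v int64) *int64 { return &v }

func main() {
	c := corev1.Container{
		Name:  "alpine-nnp-false",
		Image: "docker.io/library/alpine:3.7", // assumed
		SecurityContext: &corev1.SecurityContext{
			RunAsUser:                int64Ptr(1000), // assumed non-root uid
			AllowPrivilegeEscalation: boolPtr(false), // sets no_new_privs on the process
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}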
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:21.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-6c4e0a20-5023-49b3-a562-c8e0e026f0de
STEP: Creating a pod to test consume configMaps
Aug 20 23:41:21.739: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98" in namespace "projected-5038" to be "success or failure"
Aug 20 23:41:21.744: INFO: Pod "pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579399ms
Aug 20 23:41:23.748: INFO: Pod "pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008821241s
Aug 20 23:41:25.752: INFO: Pod "pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01291584s
STEP: Saw pod success
Aug 20 23:41:25.752: INFO: Pod "pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98" satisfied condition "success or failure"
Aug 20 23:41:25.755: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 20 23:41:25.787: INFO: Waiting for pod pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98 to disappear
Aug 20 23:41:25.792: INFO: Pod pod-projected-configmaps-6edc7386-e6bf-430e-8038-67b81a4a7d98 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:25.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5038" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
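"With mappings" refers to the items list of a projected configMap source: instead of projecting each key under its own name, a key is mapped to a chosen relative path. A minimal Go sketch, reusing the configMap name from the log with assumed key and path values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map-6c4e0a20-5023-49b3-a562-c8e0e026f0de",
						},
						// Map the key "data-1" to a nested path instead of its own name.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}}, // assumed values
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}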
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:25.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 20 23:41:25.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 20 23:41:36.178: INFO: >>> kubeConfig: /root/.kube/config
Aug 20 23:41:39.144: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:48.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-974" for this suite.

• [SLOW TEST:22.751 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":204,"skipped":3362,"failed":0}
SSSSSSSSSSSSSSSSSSS
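The "one multiversion CRD" case above registers a single CRD that serves two versions of the same group, both of which must show up in the cluster's OpenAPI document. A minimal Go sketch of such a CRD; the group, kind, and version names are illustrative, not the test's generated ones:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := apiextensionsv1.CustomResourceDefinition{
		// The metadata name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-foos.crd-publish-openapi-test.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com", // assumed group
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "e2e-test-foos", Singular: "e2e-test-foo",
				Kind: "E2eTestFoo", ListKind: "E2eTestFooList",
			},
			// Two served versions of the same group; exactly one may be the
			// storage version. Both are published to OpenAPI.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				{Name: "v3", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	b, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(b))
}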
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:48.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 20 23:41:55.200: INFO: Successfully updated pod "adopt-release-9bd78"
STEP: Checking that the Job readopts the Pod
Aug 20 23:41:55.200: INFO: Waiting up to 15m0s for pod "adopt-release-9bd78" in namespace "job-6195" to be "adopted"
Aug 20 23:41:55.248: INFO: Pod "adopt-release-9bd78": Phase="Running", Reason="", readiness=true. Elapsed: 48.029007ms
Aug 20 23:41:57.253: INFO: Pod "adopt-release-9bd78": Phase="Running", Reason="", readiness=true. Elapsed: 2.052183191s
Aug 20 23:41:57.253: INFO: Pod "adopt-release-9bd78" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 20 23:41:57.763: INFO: Successfully updated pod "adopt-release-9bd78"
STEP: Checking that the Job releases the Pod
Aug 20 23:41:57.763: INFO: Waiting up to 15m0s for pod "adopt-release-9bd78" in namespace "job-6195" to be "released"
Aug 20 23:41:57.769: INFO: Pod "adopt-release-9bd78": Phase="Running", Reason="", readiness=true. Elapsed: 5.803464ms
Aug 20 23:41:59.775: INFO: Pod "adopt-release-9bd78": Phase="Running", Reason="", readiness=true. Elapsed: 2.011995345s
Aug 20 23:41:59.775: INFO: Pod "adopt-release-9bd78" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:41:59.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6195" for this suite.

• [SLOW TEST:11.225 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":205,"skipped":3381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:41:59.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:42:00.853: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 23:42:02.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563720, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563720, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563721, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563720, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:42:06.176: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:42:07.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1481" for this suite.
STEP: Destroying namespace "webhook-1481-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.367 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":206,"skipped":3425,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:42:08.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:42:08.550: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/

[the same two-entry directory listing was returned for each of the remaining proxied log requests; the intervening output and this test's teardown lines were lost in extraction]
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":207,"skipped":3437,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:42:14.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8091" for this suite.

• [SLOW TEST:5.145 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":208,"skipped":3439,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:42:14.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 20 23:42:24.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 23:42:24.865: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 23:42:26.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 23:42:26.905: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 23:42:28.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 23:42:28.893: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:42:28.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9298" for this suite.

• [SLOW TEST:14.230 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3453,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:42:28.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6974
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6974
I0820 23:42:29.059526       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6974, replica count: 2
I0820 23:42:32.110023       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0820 23:42:35.110283       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 20 23:42:35.110: INFO: Creating new exec pod
Aug 20 23:42:42.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6974 execpodvkd48 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 20 23:42:45.497: INFO: stderr: "I0820 23:42:45.370029    3392 log.go:172] (0xc000ce6b00) (0xc000653ea0) Create stream\nI0820 23:42:45.370065    3392 log.go:172] (0xc000ce6b00) (0xc000653ea0) Stream added, broadcasting: 1\nI0820 23:42:45.376892    3392 log.go:172] (0xc000ce6b00) Reply frame received for 1\nI0820 23:42:45.376939    3392 log.go:172] (0xc000ce6b00) (0xc0005be640) Create stream\nI0820 23:42:45.376954    3392 log.go:172] (0xc000ce6b00) (0xc0005be640) Stream added, broadcasting: 3\nI0820 23:42:45.382047    3392 log.go:172] (0xc000ce6b00) Reply frame received for 3\nI0820 23:42:45.382078    3392 log.go:172] (0xc000ce6b00) (0xc00072d400) Create stream\nI0820 23:42:45.382092    3392 log.go:172] (0xc000ce6b00) (0xc00072d400) Stream added, broadcasting: 5\nI0820 23:42:45.384073    3392 log.go:172] (0xc000ce6b00) Reply frame received for 5\nI0820 23:42:45.482419    3392 log.go:172] (0xc000ce6b00) Data frame received for 5\nI0820 23:42:45.482454    3392 log.go:172] (0xc00072d400) (5) Data frame handling\nI0820 23:42:45.482475    3392 log.go:172] (0xc00072d400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0820 23:42:45.482707    3392 log.go:172] (0xc000ce6b00) Data frame received for 5\nI0820 23:42:45.482734    3392 log.go:172] (0xc00072d400) (5) Data frame handling\nI0820 23:42:45.482752    3392 log.go:172] (0xc00072d400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0820 23:42:45.483224    3392 log.go:172] (0xc000ce6b00) Data frame received for 5\nI0820 23:42:45.483258    3392 log.go:172] (0xc00072d400) (5) Data frame handling\nI0820 23:42:45.483356    3392 log.go:172] (0xc000ce6b00) Data frame received for 3\nI0820 23:42:45.483370    3392 log.go:172] (0xc0005be640) (3) Data frame handling\nI0820 23:42:45.485210    3392 log.go:172] (0xc000ce6b00) Data frame received for 1\nI0820 23:42:45.485234    3392 log.go:172] (0xc000653ea0) (1) Data frame handling\nI0820 23:42:45.485247    3392 log.go:172] (0xc000653ea0) (1) Data frame sent\nI0820 23:42:45.485260    3392 log.go:172] (0xc000ce6b00) (0xc000653ea0) Stream removed, broadcasting: 1\nI0820 23:42:45.485281    3392 log.go:172] (0xc000ce6b00) Go away received\nI0820 23:42:45.485634    3392 log.go:172] (0xc000ce6b00) (0xc000653ea0) Stream removed, broadcasting: 1\nI0820 23:42:45.485653    3392 log.go:172] (0xc000ce6b00) (0xc0005be640) Stream removed, broadcasting: 3\nI0820 23:42:45.485670    3392 log.go:172] (0xc000ce6b00) (0xc00072d400) Stream removed, broadcasting: 5\n"
Aug 20 23:42:45.497: INFO: stdout: ""
Aug 20 23:42:45.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6974 execpodvkd48 -- /bin/sh -x -c nc -zv -t -w 2 10.102.45.110 80'
Aug 20 23:42:45.714: INFO: stderr: "I0820 23:42:45.638201    3425 log.go:172] (0xc000548d10) (0xc0006d9ae0) Create stream\nI0820 23:42:45.638261    3425 log.go:172] (0xc000548d10) (0xc0006d9ae0) Stream added, broadcasting: 1\nI0820 23:42:45.641569    3425 log.go:172] (0xc000548d10) Reply frame received for 1\nI0820 23:42:45.641617    3425 log.go:172] (0xc000548d10) (0xc0009ae000) Create stream\nI0820 23:42:45.641633    3425 log.go:172] (0xc000548d10) (0xc0009ae000) Stream added, broadcasting: 3\nI0820 23:42:45.642655    3425 log.go:172] (0xc000548d10) Reply frame received for 3\nI0820 23:42:45.642709    3425 log.go:172] (0xc000548d10) (0xc000aac000) Create stream\nI0820 23:42:45.642735    3425 log.go:172] (0xc000548d10) (0xc000aac000) Stream added, broadcasting: 5\nI0820 23:42:45.643960    3425 log.go:172] (0xc000548d10) Reply frame received for 5\nI0820 23:42:45.705231    3425 log.go:172] (0xc000548d10) Data frame received for 3\nI0820 23:42:45.705269    3425 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0820 23:42:45.705328    3425 log.go:172] (0xc000548d10) Data frame received for 5\nI0820 23:42:45.705365    3425 log.go:172] (0xc000aac000) (5) Data frame handling\nI0820 23:42:45.705388    3425 log.go:172] (0xc000aac000) (5) Data frame sent\nI0820 23:42:45.705402    3425 log.go:172] (0xc000548d10) Data frame received for 5\nI0820 23:42:45.705416    3425 log.go:172] (0xc000aac000) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.45.110 80\nConnection to 10.102.45.110 80 port [tcp/http] succeeded!\nI0820 23:42:45.706678    3425 log.go:172] (0xc000548d10) Data frame received for 1\nI0820 23:42:45.706697    3425 log.go:172] (0xc0006d9ae0) (1) Data frame handling\nI0820 23:42:45.706709    3425 log.go:172] (0xc0006d9ae0) (1) Data frame sent\nI0820 23:42:45.706719    3425 log.go:172] (0xc000548d10) (0xc0006d9ae0) Stream removed, broadcasting: 1\nI0820 23:42:45.706736    3425 log.go:172] (0xc000548d10) Go away received\nI0820 23:42:45.707126    3425 log.go:172] (0xc000548d10) (0xc0006d9ae0) Stream removed, broadcasting: 1\nI0820 23:42:45.707146    3425 log.go:172] (0xc000548d10) (0xc0009ae000) Stream removed, broadcasting: 3\nI0820 23:42:45.707157    3425 log.go:172] (0xc000548d10) (0xc000aac000) Stream removed, broadcasting: 5\n"
Aug 20 23:42:45.714: INFO: stdout: ""
Aug 20 23:42:45.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6974 execpodvkd48 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31744'
Aug 20 23:42:45.913: INFO: stderr: "I0820 23:42:45.846418    3447 log.go:172] (0xc0000f6f20) (0xc000aac000) Create stream\nI0820 23:42:45.846469    3447 log.go:172] (0xc0000f6f20) (0xc000aac000) Stream added, broadcasting: 1\nI0820 23:42:45.848710    3447 log.go:172] (0xc0000f6f20) Reply frame received for 1\nI0820 23:42:45.848860    3447 log.go:172] (0xc0000f6f20) (0xc0009ec000) Create stream\nI0820 23:42:45.848878    3447 log.go:172] (0xc0000f6f20) (0xc0009ec000) Stream added, broadcasting: 3\nI0820 23:42:45.849773    3447 log.go:172] (0xc0000f6f20) Reply frame received for 3\nI0820 23:42:45.849826    3447 log.go:172] (0xc0000f6f20) (0xc0005cfa40) Create stream\nI0820 23:42:45.849858    3447 log.go:172] (0xc0000f6f20) (0xc0005cfa40) Stream added, broadcasting: 5\nI0820 23:42:45.850638    3447 log.go:172] (0xc0000f6f20) Reply frame received for 5\nI0820 23:42:45.907110    3447 log.go:172] (0xc0000f6f20) Data frame received for 3\nI0820 23:42:45.907139    3447 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0820 23:42:45.907185    3447 log.go:172] (0xc0000f6f20) Data frame received for 5\nI0820 23:42:45.907234    3447 log.go:172] (0xc0005cfa40) (5) Data frame handling\nI0820 23:42:45.907270    3447 log.go:172] (0xc0005cfa40) (5) Data frame sent\nI0820 23:42:45.907289    3447 log.go:172] (0xc0000f6f20) Data frame received for 5\nI0820 23:42:45.907304    3447 log.go:172] (0xc0005cfa40) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 31744\nConnection to 172.18.0.6 31744 port [tcp/31744] succeeded!\nI0820 23:42:45.908575    3447 log.go:172] (0xc0000f6f20) Data frame received for 1\nI0820 23:42:45.908590    3447 log.go:172] (0xc000aac000) (1) Data frame handling\nI0820 23:42:45.908596    3447 log.go:172] (0xc000aac000) (1) Data frame sent\nI0820 23:42:45.908903    3447 log.go:172] (0xc0000f6f20) (0xc000aac000) Stream removed, broadcasting: 1\nI0820 23:42:45.908954    3447 log.go:172] (0xc0000f6f20) Go away received\nI0820 23:42:45.909252    3447 log.go:172] (0xc0000f6f20) (0xc000aac000) Stream removed, broadcasting: 1\nI0820 23:42:45.909270    3447 log.go:172] (0xc0000f6f20) (0xc0009ec000) Stream removed, broadcasting: 3\nI0820 23:42:45.909279    3447 log.go:172] (0xc0000f6f20) (0xc0005cfa40) Stream removed, broadcasting: 5\n"
Aug 20 23:42:45.913: INFO: stdout: ""
Aug 20 23:42:45.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6974 execpodvkd48 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31744'
Aug 20 23:42:46.153: INFO: stderr: "I0820 23:42:46.064572    3468 log.go:172] (0xc000105130) (0xc000695c20) Create stream\nI0820 23:42:46.064635    3468 log.go:172] (0xc000105130) (0xc000695c20) Stream added, broadcasting: 1\nI0820 23:42:46.067250    3468 log.go:172] (0xc000105130) Reply frame received for 1\nI0820 23:42:46.067300    3468 log.go:172] (0xc000105130) (0xc00090c000) Create stream\nI0820 23:42:46.067312    3468 log.go:172] (0xc000105130) (0xc00090c000) Stream added, broadcasting: 3\nI0820 23:42:46.068356    3468 log.go:172] (0xc000105130) Reply frame received for 3\nI0820 23:42:46.068382    3468 log.go:172] (0xc000105130) (0xc000162000) Create stream\nI0820 23:42:46.068391    3468 log.go:172] (0xc000105130) (0xc000162000) Stream added, broadcasting: 5\nI0820 23:42:46.069523    3468 log.go:172] (0xc000105130) Reply frame received for 5\nI0820 23:42:46.143459    3468 log.go:172] (0xc000105130) Data frame received for 5\nI0820 23:42:46.143505    3468 log.go:172] (0xc000162000) (5) Data frame handling\nI0820 23:42:46.143544    3468 log.go:172] (0xc000162000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 31744\nConnection to 172.18.0.3 31744 port [tcp/31744] succeeded!\nI0820 23:42:46.143635    3468 log.go:172] (0xc000105130) Data frame received for 5\nI0820 23:42:46.143659    3468 log.go:172] (0xc000162000) (5) Data frame handling\nI0820 23:42:46.144129    3468 log.go:172] (0xc000105130) Data frame received for 3\nI0820 23:42:46.144153    3468 log.go:172] (0xc00090c000) (3) Data frame handling\nI0820 23:42:46.145320    3468 log.go:172] (0xc000105130) Data frame received for 1\nI0820 23:42:46.145343    3468 log.go:172] (0xc000695c20) (1) Data frame handling\nI0820 23:42:46.145364    3468 log.go:172] (0xc000695c20) (1) Data frame sent\nI0820 23:42:46.145372    3468 log.go:172] (0xc000105130) (0xc000695c20) Stream removed, broadcasting: 1\nI0820 23:42:46.145381    3468 log.go:172] (0xc000105130) Go away received\nI0820 23:42:46.145929    3468 log.go:172] (0xc000105130) (0xc000695c20) Stream removed, broadcasting: 1\nI0820 23:42:46.145958    3468 log.go:172] (0xc000105130) (0xc00090c000) Stream removed, broadcasting: 3\nI0820 23:42:46.145968    3468 log.go:172] (0xc000105130) (0xc000162000) Stream removed, broadcasting: 5\n"
Aug 20 23:42:46.153: INFO: stdout: ""
Aug 20 23:42:46.153: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:42:46.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6974" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.293 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":210,"skipped":3455,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:42:46.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ee3085d1-3103-470b-a3af-d536be0f7d8d
STEP: Creating a pod to test consume configMaps
Aug 20 23:42:46.323: INFO: Waiting up to 5m0s for pod "pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859" in namespace "configmap-2670" to be "success or failure"
Aug 20 23:42:46.339: INFO: Pod "pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859": Phase="Pending", Reason="", readiness=false. Elapsed: 16.606429ms
Aug 20 23:42:48.344: INFO: Pod "pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020838408s
Aug 20 23:42:50.347: INFO: Pod "pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023931467s
STEP: Saw pod success
Aug 20 23:42:50.347: INFO: Pod "pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859" satisfied condition "success or failure"
Aug 20 23:42:50.349: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859 container configmap-volume-test: 
STEP: delete the pod
Aug 20 23:42:50.391: INFO: Waiting for pod pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859 to disappear
Aug 20 23:42:50.399: INFO: Pod pod-configmaps-aac18a32-a1f4-437a-9aad-8c0f00f9e859 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:42:50.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2670" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3455,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:42:50.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 20 23:42:50.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-13'
Aug 20 23:42:50.562: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 20 23:42:50.562: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Aug 20 23:42:52.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-13'
Aug 20 23:42:52.823: INFO: stderr: ""
Aug 20 23:42:52.823: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:42:52.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-13" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":212,"skipped":3462,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:42:52.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-96bf1894-3c63-4683-9644-1f6f85b98043
STEP: Creating a pod to test consume secrets
Aug 20 23:42:53.237: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42" in namespace "projected-9450" to be "success or failure"
Aug 20 23:42:53.323: INFO: Pod "pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42": Phase="Pending", Reason="", readiness=false. Elapsed: 86.075188ms
Aug 20 23:42:55.327: INFO: Pod "pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090128416s
Aug 20 23:42:57.331: INFO: Pod "pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094589936s
STEP: Saw pod success
Aug 20 23:42:57.332: INFO: Pod "pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42" satisfied condition "success or failure"
Aug 20 23:42:57.335: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42 container projected-secret-volume-test: 
STEP: delete the pod
Aug 20 23:42:57.395: INFO: Waiting for pod pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42 to disappear
Aug 20 23:42:57.399: INFO: Pod pod-projected-secrets-98942d86-bf99-4797-bdb0-be85826e0b42 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:42:57.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9450" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3531,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:42:57.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Aug 20 23:42:57.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-7070 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 20 23:42:57.583: INFO: stderr: ""
Aug 20 23:42:57.583: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Aug 20 23:42:57.583: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 20 23:42:57.583: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7070" to be "running and ready, or succeeded"
Aug 20 23:42:57.599: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.914137ms
Aug 20 23:42:59.695: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112104882s
Aug 20 23:43:01.699: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.116211329s
Aug 20 23:43:01.700: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 20 23:43:01.700: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 20 23:43:01.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7070'
Aug 20 23:43:01.806: INFO: stderr: ""
Aug 20 23:43:01.806: INFO: stdout: "I0820 23:43:00.012138       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/cbwp 249\nI0820 23:43:00.212329       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/tgc 448\nI0820 23:43:00.412323       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/7hv 367\nI0820 23:43:00.612333       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/gbtv 525\nI0820 23:43:00.812313       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/4gb 300\nI0820 23:43:01.012359       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/9d8j 568\nI0820 23:43:01.212380       1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/n5qt 401\nI0820 23:43:01.412300       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/h9x 268\nI0820 23:43:01.612347       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/c8fs 592\n"
STEP: limiting log lines
Aug 20 23:43:01.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7070 --tail=1'
Aug 20 23:43:01.923: INFO: stderr: ""
Aug 20 23:43:01.923: INFO: stdout: "I0820 23:43:01.812326       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/72m 580\n"
Aug 20 23:43:01.923: INFO: got output "I0820 23:43:01.812326       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/72m 580\n"
STEP: limiting log bytes
Aug 20 23:43:01.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7070 --limit-bytes=1'
Aug 20 23:43:02.021: INFO: stderr: ""
Aug 20 23:43:02.021: INFO: stdout: "I"
Aug 20 23:43:02.021: INFO: got output "I"
STEP: exposing timestamps
Aug 20 23:43:02.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7070 --tail=1 --timestamps'
Aug 20 23:43:02.131: INFO: stderr: ""
Aug 20 23:43:02.131: INFO: stdout: "2020-08-20T23:43:02.012483586Z I0820 23:43:02.012299       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/589j 360\n"
Aug 20 23:43:02.131: INFO: got output "2020-08-20T23:43:02.012483586Z I0820 23:43:02.012299       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/589j 360\n"
STEP: restricting to a time range
Aug 20 23:43:04.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7070 --since=1s'
Aug 20 23:43:04.742: INFO: stderr: ""
Aug 20 23:43:04.742: INFO: stdout: "I0820 23:43:03.812290       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/lfth 386\nI0820 23:43:04.012349       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/k9q 331\nI0820 23:43:04.212429       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/4gw 514\nI0820 23:43:04.412300       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/5nl9 220\nI0820 23:43:04.612307       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/ztdl 581\n"
Aug 20 23:43:04.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7070 --since=24h'
Aug 20 23:43:04.878: INFO: stderr: ""
Aug 20 23:43:04.878: INFO: stdout: "I0820 23:43:00.012138       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/cbwp 249\nI0820 23:43:00.212329       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/tgc 448\nI0820 23:43:00.412323       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/7hv 367\nI0820 23:43:00.612333       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/gbtv 525\nI0820 23:43:00.812313       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/4gb 300\nI0820 23:43:01.012359       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/9d8j 568\nI0820 23:43:01.212380       1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/n5qt 401\nI0820 23:43:01.412300       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/h9x 268\nI0820 23:43:01.612347       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/c8fs 592\nI0820 23:43:01.812326       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/72m 580\nI0820 23:43:02.012299       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/589j 360\nI0820 23:43:02.212299       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/vpc 237\nI0820 23:43:02.412366       1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/82lk 534\nI0820 23:43:02.612305       1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/29x 299\nI0820 23:43:02.812365       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/wg8 434\nI0820 23:43:03.012279       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/4lc9 534\nI0820 23:43:03.212315       1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/r7q 378\nI0820 23:43:03.412342       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/vhb9 216\nI0820 23:43:03.612265       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/lhtt 442\nI0820 23:43:03.812290       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/lfth 386\nI0820 23:43:04.012349       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/k9q 331\nI0820 23:43:04.212429       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/4gw 514\nI0820 23:43:04.412300       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/5nl9 220\nI0820 23:43:04.612307       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/ztdl 581\nI0820 23:43:04.812264       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/9mp 399\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 20 23:43:04.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7070'
Aug 20 23:43:08.118: INFO: stderr: ""
Aug 20 23:43:08.118: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:43:08.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7070" for this suite.

• [SLOW TEST:10.723 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":214,"skipped":3540,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:43:08.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:43:09.182: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 23:43:11.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 23:43:13.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733563789, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:43:16.672: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:43:16.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2190" for this suite.
STEP: Destroying namespace "webhook-2190-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.686 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":215,"skipped":3574,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:43:16.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:43:24.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6525" for this suite.

• [SLOW TEST:8.144 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3593,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:43:24.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Aug 20 23:43:29.104: INFO: Pod pod-hostip-2e61f4d8-8cf1-419f-9d6f-7e3c1e5c56a4 has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:43:29.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-102" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3601,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:43:29.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 20 23:43:29.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2104'
Aug 20 23:43:29.268: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 20 23:43:29.268: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 20 23:43:31.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2104'
Aug 20 23:43:31.406: INFO: stderr: ""
Aug 20 23:43:31.406: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:43:31.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2104" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":218,"skipped":3616,"failed":0}

------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:43:31.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 20 23:43:32.974: INFO: Pod name wrapped-volume-race-9babed92-f349-4935-b4be-25b6568ccaa6: Found 0 pods out of 5
Aug 20 23:43:37.983: INFO: Pod name wrapped-volume-race-9babed92-f349-4935-b4be-25b6568ccaa6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9babed92-f349-4935-b4be-25b6568ccaa6 in namespace emptydir-wrapper-5574, will wait for the garbage collector to delete the pods
Aug 20 23:43:52.099: INFO: Deleting ReplicationController wrapped-volume-race-9babed92-f349-4935-b4be-25b6568ccaa6 took: 8.227228ms
Aug 20 23:43:52.400: INFO: Terminating ReplicationController wrapped-volume-race-9babed92-f349-4935-b4be-25b6568ccaa6 pods took: 300.702443ms
STEP: Creating RC which spawns configmap-volume pods
Aug 20 23:44:02.654: INFO: Pod name wrapped-volume-race-0c5e04f8-cf80-4871-9609-a47a75b4872d: Found 0 pods out of 5
Aug 20 23:44:07.659: INFO: Pod name wrapped-volume-race-0c5e04f8-cf80-4871-9609-a47a75b4872d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0c5e04f8-cf80-4871-9609-a47a75b4872d in namespace emptydir-wrapper-5574, will wait for the garbage collector to delete the pods
Aug 20 23:44:21.892: INFO: Deleting ReplicationController wrapped-volume-race-0c5e04f8-cf80-4871-9609-a47a75b4872d took: 65.322313ms
Aug 20 23:44:22.192: INFO: Terminating ReplicationController wrapped-volume-race-0c5e04f8-cf80-4871-9609-a47a75b4872d pods took: 300.240313ms
STEP: Creating RC which spawns configmap-volume pods
Aug 20 23:44:31.840: INFO: Pod name wrapped-volume-race-ccbf8a96-f393-4142-b655-78cb52c72f9b: Found 0 pods out of 5
Aug 20 23:44:36.848: INFO: Pod name wrapped-volume-race-ccbf8a96-f393-4142-b655-78cb52c72f9b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ccbf8a96-f393-4142-b655-78cb52c72f9b in namespace emptydir-wrapper-5574, will wait for the garbage collector to delete the pods
Aug 20 23:44:52.963: INFO: Deleting ReplicationController wrapped-volume-race-ccbf8a96-f393-4142-b655-78cb52c72f9b took: 13.643168ms
Aug 20 23:44:53.263: INFO: Terminating ReplicationController wrapped-volume-race-ccbf8a96-f393-4142-b655-78cb52c72f9b pods took: 300.253916ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:45:03.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5574" for this suite.

• [SLOW TEST:91.872 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":219,"skipped":3616,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:45:03.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4355
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4355
STEP: Creating statefulset with conflicting port in namespace statefulset-4355
STEP: Waiting until pod test-pod starts running in namespace statefulset-4355
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4355
Aug 20 23:45:09.903: INFO: Observed stateful pod in namespace: statefulset-4355, name: ss-0, uid: 6041885a-8f76-4117-a1bf-265a39b8a481, status phase: Failed. Waiting for statefulset controller to delete.
Aug 20 23:45:09.915: INFO: Observed stateful pod in namespace: statefulset-4355, name: ss-0, uid: 6041885a-8f76-4117-a1bf-265a39b8a481, status phase: Failed. Waiting for statefulset controller to delete.
Aug 20 23:45:09.922: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4355
STEP: Removing pod with conflicting port in namespace statefulset-4355
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4355 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 20 23:45:16.060: INFO: Deleting all statefulset in ns statefulset-4355
Aug 20 23:45:16.062: INFO: Scaling statefulset ss to 0
Aug 20 23:45:26.077: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 23:45:26.079: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:45:26.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4355" for this suite.

• [SLOW TEST:23.007 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":220,"skipped":3667,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:45:26.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ae3db70b-6877-45cc-afad-296db24e8b43
STEP: Creating a pod to test consume configMaps
Aug 20 23:45:26.558: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd" in namespace "projected-8592" to be "success or failure"
Aug 20 23:45:26.568: INFO: Pod "pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.717395ms
Aug 20 23:45:28.571: INFO: Pod "pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012952972s
Aug 20 23:45:30.574: INFO: Pod "pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.016239493s
Aug 20 23:45:32.579: INFO: Pod "pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020457541s
STEP: Saw pod success
Aug 20 23:45:32.579: INFO: Pod "pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd" satisfied condition "success or failure"
Aug 20 23:45:32.581: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd container projected-configmap-volume-test: 
STEP: delete the pod
Aug 20 23:45:32.674: INFO: Waiting for pod pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd to disappear
Aug 20 23:45:32.681: INFO: Pod pod-projected-configmaps-f0c4dc21-0211-4751-af18-e8609941b0cd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:45:32.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8592" for this suite.

• [SLOW TEST:6.383 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3677,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:45:32.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2756, will wait for the garbage collector to delete the pods
Aug 20 23:45:36.889: INFO: Deleting Job.batch foo took: 6.371434ms
Aug 20 23:45:36.989: INFO: Terminating Job.batch foo pods took: 100.265588ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:46:12.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2756" for this suite.

• [SLOW TEST:39.596 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":222,"skipped":3701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:46:12.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:46:12.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 20 23:46:13.306: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-20T23:46:13Z generation:1 name:name1 resourceVersion:1962149 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:65ba0c57-58c6-48e9-acd5-4d28237ce289] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 20 23:46:23.311: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-20T23:46:23Z generation:1 name:name2 resourceVersion:1962195 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:64363cbb-980a-437a-be65-1dd3d579404c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 20 23:46:33.317: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-20T23:46:13Z generation:2 name:name1 resourceVersion:1962225 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:65ba0c57-58c6-48e9-acd5-4d28237ce289] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 20 23:46:43.327: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-20T23:46:23Z generation:2 name:name2 resourceVersion:1962254 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:64363cbb-980a-437a-be65-1dd3d579404c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 20 23:46:53.335: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-20T23:46:13Z generation:2 name:name1 resourceVersion:1962284 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:65ba0c57-58c6-48e9-acd5-4d28237ce289] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 20 23:47:03.343: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-20T23:46:23Z generation:2 name:name2 resourceVersion:1962314 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:64363cbb-980a-437a-be65-1dd3d579404c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:47:13.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7049" for this suite.

• [SLOW TEST:61.590 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":223,"skipped":3742,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:47:13.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 20 23:47:14.266: INFO: Waiting up to 5m0s for pod "pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1" in namespace "emptydir-9639" to be "success or failure"
Aug 20 23:47:14.350: INFO: Pod "pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1": Phase="Pending", Reason="", readiness=false. Elapsed: 84.266462ms
Aug 20 23:47:16.354: INFO: Pod "pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088310759s
Aug 20 23:47:18.357: INFO: Pod "pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1": Phase="Running", Reason="", readiness=true. Elapsed: 4.091466912s
Aug 20 23:47:20.416: INFO: Pod "pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.149684788s
STEP: Saw pod success
Aug 20 23:47:20.416: INFO: Pod "pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1" satisfied condition "success or failure"
Aug 20 23:47:20.418: INFO: Trying to get logs from node jerma-worker pod pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1 container test-container: 
STEP: delete the pod
Aug 20 23:47:20.633: INFO: Waiting for pod pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1 to disappear
Aug 20 23:47:20.699: INFO: Pod pod-99fed455-5ccb-4eae-b271-be1feeb1ceb1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:47:20.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9639" for this suite.

• [SLOW TEST:6.991 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3762,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:47:20.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:47:21.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 20 23:47:22.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 create -f -'
Aug 20 23:47:27.956: INFO: stderr: ""
Aug 20 23:47:27.956: INFO: stdout: "e2e-test-crd-publish-openapi-6644-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 20 23:47:27.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 delete e2e-test-crd-publish-openapi-6644-crds test-foo'
Aug 20 23:47:28.085: INFO: stderr: ""
Aug 20 23:47:28.085: INFO: stdout: "e2e-test-crd-publish-openapi-6644-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 20 23:47:28.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 apply -f -'
Aug 20 23:47:28.305: INFO: stderr: ""
Aug 20 23:47:28.305: INFO: stdout: "e2e-test-crd-publish-openapi-6644-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 20 23:47:28.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 delete e2e-test-crd-publish-openapi-6644-crds test-foo'
Aug 20 23:47:28.410: INFO: stderr: ""
Aug 20 23:47:28.410: INFO: stdout: "e2e-test-crd-publish-openapi-6644-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 20 23:47:28.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 create -f -'
Aug 20 23:47:28.667: INFO: rc: 1
Aug 20 23:47:28.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 apply -f -'
Aug 20 23:47:28.950: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 20 23:47:28.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 create -f -'
Aug 20 23:47:29.201: INFO: rc: 1
Aug 20 23:47:29.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9261 apply -f -'
Aug 20 23:47:29.439: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 20 23:47:29.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6644-crds'
Aug 20 23:47:29.684: INFO: stderr: ""
Aug 20 23:47:29.684: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6644-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 20 23:47:29.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6644-crds.metadata'
Aug 20 23:47:29.961: INFO: stderr: ""
Aug 20 23:47:29.962: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6644-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 20 23:47:29.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6644-crds.spec'
Aug 20 23:47:30.210: INFO: stderr: ""
Aug 20 23:47:30.210: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6644-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 20 23:47:30.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6644-crds.spec.bars'
Aug 20 23:47:30.463: INFO: stderr: ""
Aug 20 23:47:30.463: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6644-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain returns an error when called on a property that doesn't exist
Aug 20 23:47:30.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6644-crds.spec.bars2'
Aug 20 23:47:30.712: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:47:32.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9261" for this suite.

• [SLOW TEST:11.724 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":225,"skipped":3769,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:47:32.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 20 23:47:32.651: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:47:51.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-903" for this suite.

• [SLOW TEST:19.005 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3790,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:47:51.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-5462
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5462 to expose endpoints map[]
Aug 20 23:47:51.845: INFO: Get endpoints failed (3.837616ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 20 23:47:52.849: INFO: successfully validated that service multi-endpoint-test in namespace services-5462 exposes endpoints map[] (1.007199545s elapsed)
STEP: Creating pod pod1 in namespace services-5462
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5462 to expose endpoints map[pod1:[100]]
Aug 20 23:47:56.991: INFO: successfully validated that service multi-endpoint-test in namespace services-5462 exposes endpoints map[pod1:[100]] (4.135949082s elapsed)
STEP: Creating pod pod2 in namespace services-5462
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5462 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 20 23:48:00.047: INFO: successfully validated that service multi-endpoint-test in namespace services-5462 exposes endpoints map[pod1:[100] pod2:[101]] (3.051477592s elapsed)
STEP: Deleting pod pod1 in namespace services-5462
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5462 to expose endpoints map[pod2:[101]]
Aug 20 23:48:01.077: INFO: successfully validated that service multi-endpoint-test in namespace services-5462 exposes endpoints map[pod2:[101]] (1.026880052s elapsed)
STEP: Deleting pod pod2 in namespace services-5462
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5462 to expose endpoints map[]
Aug 20 23:48:02.108: INFO: successfully validated that service multi-endpoint-test in namespace services-5462 exposes endpoints map[] (1.024658434s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:48:02.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5462" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:10.810 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":227,"skipped":3803,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:48:02.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 20 23:48:03.151: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 20 23:48:05.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564083, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564083, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564083, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564083, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 20 23:48:08.209: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:48:08.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6843" for this suite.
STEP: Destroying namespace "webhook-6843-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.022 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":228,"skipped":3807,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:48:08.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:48:15.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5460" for this suite.

• [SLOW TEST:7.078 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":229,"skipped":3814,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:48:15.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:48:48.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4901" for this suite.

• [SLOW TEST:32.867 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3818,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:48:48.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:48:52.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4922" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":231,"skipped":3828,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:48:52.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 20 23:48:52.679: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a" in namespace "projected-4209" to be "success or failure"
Aug 20 23:48:52.683: INFO: Pod "downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674889ms
Aug 20 23:48:54.830: INFO: Pod "downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150985448s
Aug 20 23:48:56.834: INFO: Pod "downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154975395s
STEP: Saw pod success
Aug 20 23:48:56.834: INFO: Pod "downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a" satisfied condition "success or failure"
Aug 20 23:48:56.837: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a container client-container: 
STEP: delete the pod
Aug 20 23:48:56.961: INFO: Waiting for pod downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a to disappear
Aug 20 23:48:56.965: INFO: Pod downwardapi-volume-2fff126c-2555-494b-a85f-6a182e5f582a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:48:56.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4209" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3832,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:48:56.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 20 23:48:57.028: INFO: Waiting up to 5m0s for pod "pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c" in namespace "emptydir-6157" to be "success or failure"
Aug 20 23:48:57.045: INFO: Pod "pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.331527ms
Aug 20 23:48:59.048: INFO: Pod "pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019833199s
Aug 20 23:49:01.052: INFO: Pod "pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02347242s
STEP: Saw pod success
Aug 20 23:49:01.052: INFO: Pod "pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c" satisfied condition "success or failure"
Aug 20 23:49:01.054: INFO: Trying to get logs from node jerma-worker2 pod pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c container test-container: 
STEP: delete the pod
Aug 20 23:49:01.094: INFO: Waiting for pod pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c to disappear
Aug 20 23:49:01.129: INFO: Pod pod-a4d0bc5b-d82b-49ef-9ea8-1a255dc1738c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:49:01.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6157" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3841,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:49:01.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7417 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7417;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7417 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7417;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7417.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7417.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7417.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7417.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7417.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7417.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 55.93.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.93.55_udp@PTR;check="$$(dig +tcp +noall +answer +search 55.93.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.93.55_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7417 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7417;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7417 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7417;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7417.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7417.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7417.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7417.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7417.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7417.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 55.93.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.93.55_udp@PTR;check="$$(dig +tcp +noall +answer +search 55.93.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.93.55_tcp@PTR;sleep 1; done
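
For readability: both probe containers run the same shell loop, and the doubled "$$" above is the framework's template escaping for a single "$". Below is a hand-unescaped sketch of the core of the wheezy variant, using the paths and names from the command above; the SRV, pod A-record, and PTR checks follow the same pattern and are elided here.

    # Runs inside the probe container, which serves /results over HTTP.
    for i in $(seq 1 600); do
      for name in dns-test-service dns-test-service.dns-7417 dns-test-service.dns-7417.svc; do
        # UDP and TCP A-record lookups; +search lets the pod's resolv.conf
        # search path complete the partially qualified names.
        check="$(dig +notcp +noall +answer +search "$name" A)" \
          && test -n "$check" && echo OK > "/results/wheezy_udp@$name"
        check="$(dig +tcp +noall +answer +search "$name" A)" \
          && test -n "$check" && echo OK > "/results/wheezy_tcp@$name"
      done
      sleep 1
    done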

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 20 23:49:07.363: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.365: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.368: INFO: Unable to read wheezy_udp@dns-test-service.dns-7417 from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.370: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7417 from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.378: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.381: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.399: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.401: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.404: INFO: Unable to read jessie_udp@dns-test-service.dns-7417 from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.406: INFO: Unable to read jessie_tcp@dns-test-service.dns-7417 from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.409: INFO: Unable to read jessie_udp@dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.414: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.417: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7417.svc from pod dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7: the server could not find the requested resource (get pods dns-test-7262849a-9f09-4626-b2c3-75d1429862c7)
Aug 20 23:49:07.432: INFO: Lookups using dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7417 wheezy_tcp@dns-test-service.dns-7417 wheezy_udp@dns-test-service.dns-7417.svc wheezy_tcp@dns-test-service.dns-7417.svc wheezy_udp@_http._tcp.dns-test-service.dns-7417.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7417.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7417 jessie_tcp@dns-test-service.dns-7417 jessie_udp@dns-test-service.dns-7417.svc jessie_tcp@dns-test-service.dns-7417.svc jessie_udp@_http._tcp.dns-test-service.dns-7417.svc jessie_tcp@_http._tcp.dns-test-service.dns-7417.svc]

[... five further identical retry cycles at 23:49:12, 23:49:17, 23:49:22, 23:49:27, and 23:49:32, each reporting the same sixteen "Unable to read ... the server could not find the requested resource" errors and the same failed-lookup summary for the wheezy and jessie names as the 23:49:07 cycle above ...]

Aug 20 23:49:37.522: INFO: DNS probes using dns-7417/dns-test-7262849a-9f09-4626-b2c3-75d1429862c7 succeeded
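
A note on the errors above: the framework checks each expected name by reading the corresponding marker file (e.g. /results/wheezy_udp@dns-test-service) through the API server's pod proxy; until that lookup has succeeded, the file does not exist and the read surfaces as "the server could not find the requested resource". The ~5s retries are therefore expected until every probe passes, as they do here at 23:49:37. A manual spot-check while the test runs might look like the following (the container name "jessie-querier" is an assumption, not shown in the log):

    # Hypothetical spot-check: read one marker file from the prober pod.
    # This fails until the corresponding dig lookup has written the file.
    kubectl exec -n dns-7417 dns-test-7262849a-9f09-4626-b2c3-75d1429862c7 \
      -c jessie-querier -- cat '/results/jessie_udp@dns-test-service'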

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:49:38.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7417" for this suite.

• [SLOW TEST:37.388 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":234,"skipped":3855,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:49:38.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-pq98
STEP: Creating a pod to test atomic-volume-subpath
Aug 20 23:49:38.784: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pq98" in namespace "subpath-2813" to be "success or failure"
Aug 20 23:49:38.860: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Pending", Reason="", readiness=false. Elapsed: 75.523819ms
Aug 20 23:49:40.864: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079224457s
Aug 20 23:49:42.867: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 4.083076378s
Aug 20 23:49:44.871: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 6.086915208s
Aug 20 23:49:46.875: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 8.090951773s
Aug 20 23:49:48.880: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 10.09519826s
Aug 20 23:49:51.106: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 12.321353176s
Aug 20 23:49:53.110: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 14.325482757s
Aug 20 23:49:55.113: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 16.329021384s
Aug 20 23:49:57.117: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 18.332736822s
Aug 20 23:49:59.121: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 20.336677206s
Aug 20 23:50:01.124: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 22.339634826s
Aug 20 23:50:03.129: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Running", Reason="", readiness=true. Elapsed: 24.344907223s
Aug 20 23:50:05.159: INFO: Pod "pod-subpath-test-downwardapi-pq98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.374786917s
STEP: Saw pod success
Aug 20 23:50:05.159: INFO: Pod "pod-subpath-test-downwardapi-pq98" satisfied condition "success or failure"
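
The "success or failure" wait above is the framework polling the pod's phase. A rough kubectl equivalent of that loop, using the pod and namespace from the log and the same 5-minute ceiling:

    # Poll the pod phase until it reaches Succeeded or Failed (max ~5m).
    for i in $(seq 1 150); do
      phase=$(kubectl get pod pod-subpath-test-downwardapi-pq98 -n subpath-2813 \
        -o jsonpath='{.status.phase}')
      case "$phase" in
        Succeeded|Failed) echo "pod finished: $phase"; break ;;
      esac
      sleep 2
    done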
Aug 20 23:50:05.162: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-pq98 container test-container-subpath-downwardapi-pq98: 
STEP: delete the pod
Aug 20 23:50:05.203: INFO: Waiting for pod pod-subpath-test-downwardapi-pq98 to disappear
Aug 20 23:50:05.207: INFO: Pod pod-subpath-test-downwardapi-pq98 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-pq98
Aug 20 23:50:05.207: INFO: Deleting pod "pod-subpath-test-downwardapi-pq98" in namespace "subpath-2813"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:50:05.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2813" for this suite.

• [SLOW TEST:26.666 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":235,"skipped":3873,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:50:05.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 20 23:50:09.841: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4700 pod-service-account-808b0349-291f-46ea-99ff-1ea1386c0e17 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 20 23:50:10.039: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4700 pod-service-account-808b0349-291f-46ea-99ff-1ea1386c0e17 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 20 23:50:10.245: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4700 pod-service-account-808b0349-291f-46ea-99ff-1ea1386c0e17 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
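
The three exec commands above read the projected service-account files one at a time; condensed into a single loop over the same pod, namespace, and mount path:

    for f in token ca.crt namespace; do
      kubectl exec --namespace=svcaccounts-4700 \
        pod-service-account-808b0349-291f-46ea-99ff-1ea1386c0e17 -c=test \
        -- cat "/var/run/secrets/kubernetes.io/serviceaccount/$f"
    done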
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:50:10.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4700" for this suite.

• [SLOW TEST:5.278 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":236,"skipped":3876,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:50:10.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 in namespace container-probe-7657
Aug 20 23:50:16.740: INFO: Started pod liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 in namespace container-probe-7657
STEP: checking the pod's current state and verifying that restartCount is present
Aug 20 23:50:16.743: INFO: Initial restart count of pod liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 is 0
Aug 20 23:50:36.786: INFO: Restart count of pod container-probe-7657/liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 is now 1 (20.042964906s elapsed)
Aug 20 23:50:55.368: INFO: Restart count of pod container-probe-7657/liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 is now 2 (38.625014274s elapsed)
Aug 20 23:51:17.509: INFO: Restart count of pod container-probe-7657/liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 is now 3 (1m0.765855196s elapsed)
Aug 20 23:51:35.620: INFO: Restart count of pod container-probe-7657/liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 is now 4 (1m18.876639713s elapsed)
Aug 20 23:52:50.256: INFO: Restart count of pod container-probe-7657/liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 is now 5 (2m33.513038725s elapsed)
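
The restart counts above come from polling the pod status. A minimal manual equivalent, assuming the liveness container is the pod's first (and only) container:

    # Watch restartCount; the test asserts it only ever increases.
    while true; do
      kubectl get pod liveness-690152be-b7c5-4ee8-9ca3-b353cda7f2e0 \
        -n container-probe-7657 \
        -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
      sleep 5
    done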
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:52:50.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7657" for this suite.

• [SLOW TEST:159.803 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3902,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:52:50.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 20 23:52:50.468: INFO: PodSpec: initContainers in spec.initContainers
Aug 20 23:53:41.895: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e0737ed7-8911-4da9-855a-2aaf602f0755", GenerateName:"", Namespace:"init-container-8837", SelfLink:"/api/v1/namespaces/init-container-8837/pods/pod-init-e0737ed7-8911-4da9-855a-2aaf602f0755", UID:"8f596eee-acb9-4309-943b-31a937ec9764", ResourceVersion:"1964021", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733564370, loc:(*time.Location)(0x7931640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"468218621"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dnvlh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002aa0000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dnvlh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dnvlh", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dnvlh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003844068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003884000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0038440f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003844110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003844118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00384411c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564370, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564370, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564370, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733564370, loc:(*time.Location)(0x7931640)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.121", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.121"}}, StartTime:(*v1.Time)(0xc0026420a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fb0070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fb00e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e449b1d5841782fb689f646c7123830f72d4afdd1b7b530a53dda3148bbdb1a1", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002642120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0026420e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00384419f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
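
The pod dump above is easier to read as a manifest. A minimal reconstruction from the fields it shows: init1 runs /bin/false and keeps restarting (RestartCount:3 and climbing), so init2 and the app container run1 stay Waiting, which is exactly what the test asserts.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-e0737ed7-8911-4da9-855a-2aaf602f0755
      namespace: init-container-8837
      labels:
        name: foo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]   # always fails; initialization never completes
      - name: init2
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 100m
    EOF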
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:53:41.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8837" for this suite.

• [SLOW TEST:51.663 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":238,"skipped":3906,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:53:41.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 20 23:53:49.576: INFO: Successfully updated pod "pod-update-6b42d2b8-efcc-4a7d-9ec4-30e59bc203af"
STEP: verifying the updated pod is in kubernetes
Aug 20 23:53:49.597: INFO: Pod update OK
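(For reference: the update above is made through the API client, and the exact field changed is not shown in this log. An equivalent hand-run edit, using an illustrative label key/value, would be:)
    kubectl label pod pod-update-6b42d2b8-efcc-4a7d-9ec4-30e59bc203af --namespace=pods-4852 \
      time=modified --overwrite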
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:53:49.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4852" for this suite.

• [SLOW TEST:7.644 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3919,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:53:49.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 20 23:53:49.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7079'
Aug 20 23:53:49.747: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
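(The --generator=run/v1 flag used above is deprecated, as the warning says; its suggested replacements, with the same image, would look roughly like:)
    # create a bare pod instead of a replication controller
    kubectl run e2e-test-httpd --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7079
    # or create a managed workload explicitly
    kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7079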
Aug 20 23:53:49.747: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Aug 20 23:53:49.779: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-gmpxj]
Aug 20 23:53:49.780: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-gmpxj" in namespace "kubectl-7079" to be "running and ready"
Aug 20 23:53:49.844: INFO: Pod "e2e-test-httpd-rc-gmpxj": Phase="Pending", Reason="", readiness=false. Elapsed: 64.153401ms
Aug 20 23:53:51.964: INFO: Pod "e2e-test-httpd-rc-gmpxj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184482123s
Aug 20 23:53:53.969: INFO: Pod "e2e-test-httpd-rc-gmpxj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189042553s
Aug 20 23:53:55.973: INFO: Pod "e2e-test-httpd-rc-gmpxj": Phase="Running", Reason="", readiness=true. Elapsed: 6.193075316s
Aug 20 23:53:55.973: INFO: Pod "e2e-test-httpd-rc-gmpxj" satisfied condition "running and ready"
Aug 20 23:53:55.973: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-gmpxj]
Aug 20 23:53:55.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7079'
Aug 20 23:53:56.106: INFO: stderr: ""
Aug 20 23:53:56.106: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.127. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.127. Set the 'ServerName' directive globally to suppress this message\n[Thu Aug 20 23:53:53.188131 2020] [mpm_event:notice] [pid 1:tid 140648528231272] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Aug 20 23:53:53.188188 2020] [core:notice] [pid 1:tid 140648528231272] AH00094: Command line: 'httpd -D FOREGROUND'\n"
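(The repeated AH00558 message in the container log is harmless; per the message itself, it can be silenced by setting a global ServerName in the image's config, e.g.:)
    echo 'ServerName localhost' >> /usr/local/apache2/conf/httpd.conf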
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Aug 20 23:53:56.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7079'
Aug 20 23:53:56.206: INFO: stderr: ""
Aug 20 23:53:56.206: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:53:56.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7079" for this suite.

• [SLOW TEST:6.607 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":240,"skipped":3936,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:53:56.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
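(A minimal sketch of the kind of pod this step creates: the downward API exposes the node's IP to the container through an env var with fieldRef status.hostIP. The pod name and command here are illustrative, not the test's exact spec:)
    kubectl apply --namespace=downward-api-4134 -f - <<'EOF'
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "downward-api-hostip-example"},
      "spec": {
        "restartPolicy": "Never",
        "containers": [{
          "name": "dapi-container",
          "image": "docker.io/library/busybox:1.29",
          "command": ["sh", "-c", "printenv HOST_IP"],
          "env": [{"name": "HOST_IP", "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}}}]
        }]
      }
    }
    EOF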
Aug 20 23:53:56.273: INFO: Waiting up to 5m0s for pod "downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d" in namespace "downward-api-4134" to be "success or failure"
Aug 20 23:53:56.292: INFO: Pod "downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.923879ms
Aug 20 23:53:58.437: INFO: Pod "downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164445641s
Aug 20 23:54:00.441: INFO: Pod "downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.167975923s
STEP: Saw pod success
Aug 20 23:54:00.441: INFO: Pod "downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d" satisfied condition "success or failure"
Aug 20 23:54:00.443: INFO: Trying to get logs from node jerma-worker pod downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d container dapi-container: 
STEP: delete the pod
Aug 20 23:54:00.541: INFO: Waiting for pod downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d to disappear
Aug 20 23:54:00.580: INFO: Pod downward-api-329ca2a9-b5f3-49d1-b643-b7f2b1cbf48d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:54:00.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4134" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3960,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:54:00.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 20 23:54:00.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3283'
Aug 20 23:54:00.784: INFO: stderr: ""
Aug 20 23:54:00.785: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 20 23:54:05.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3283 -o json'
Aug 20 23:54:05.937: INFO: stderr: ""
Aug 20 23:54:05.937: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-20T23:54:00Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3283\",\n        \"resourceVersion\": \"1964185\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3283/pods/e2e-test-httpd-pod\",\n        \"uid\": \"0ebffc77-cb08-4a78-9581-cab824491781\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-pkh4l\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-pkh4l\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-pkh4l\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-20T23:54:00Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-20T23:54:04Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-20T23:54:04Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-20T23:54:00Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://1091001bf600148716164e9a5609dcd6329eb3884667b486031d8ff1674b5973\",\n                
\"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-20T23:54:03Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.123\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.123\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-20T23:54:00Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 20 23:54:05.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3283'
Aug 20 23:54:06.249: INFO: stderr: ""
Aug 20 23:54:06.249: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
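(The replace step feeds an edited copy of the pod's JSON back through kubectl on stdin; done by hand, the equivalent would be roughly:)
    kubectl get pod e2e-test-httpd-pod --namespace=kubectl-3283 -o json \
      | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
      | kubectl replace --namespace=kubectl-3283 -f -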
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 20 23:54:06.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3283'
Aug 20 23:54:23.764: INFO: stderr: ""
Aug 20 23:54:23.764: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:54:23.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3283" for this suite.

• [SLOW TEST:23.500 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":242,"skipped":3973,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:54:24.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-e03c71c8-1d12-42a7-b7e0-cc0b548590f7 in namespace container-probe-2455
Aug 20 23:54:34.694: INFO: Started pod test-webserver-e03c71c8-1d12-42a7-b7e0-cc0b548590f7 in namespace container-probe-2455
STEP: checking the pod's current state and verifying that restartCount is present
Aug 20 23:54:34.697: INFO: Initial restart count of pod test-webserver-e03c71c8-1d12-42a7-b7e0-cc0b548590f7 is 0
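(A sketch of a pod with an HTTP liveness probe of the sort exercised here: the kubelet GETs the path periodically and restarts the container only if the probe fails, which is why the restart count must stay at 0. Image, path, and timings below are illustrative, not the test's exact spec:)
    kubectl apply --namespace=container-probe-2455 -f - <<'EOF'
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "liveness-http-example"},
      "spec": {
        "containers": [{
          "name": "test-webserver",
          "image": "docker.io/library/httpd:2.4.38-alpine",
          "ports": [{"containerPort": 80}],
          "livenessProbe": {
            "httpGet": {"path": "/", "port": 80},
            "initialDelaySeconds": 15,
            "periodSeconds": 10,
            "failureThreshold": 3
          }
        }]
      }
    }
    EOF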
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:58:35.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2455" for this suite.

• [SLOW TEST:251.596 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3986,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:58:35.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 20 23:58:35.767: INFO: >>> kubeConfig: /root/.kube/config
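(The CRD under test is created through the Go client and never printed here. "Defaulting" means the apiserver fills schema-declared defaults both on incoming requests and when reading objects back from storage; a minimal CRD that declares a default, with an illustrative group and kind, looks like:)
    kubectl apply -f - <<'EOF'
    {
      "apiVersion": "apiextensions.k8s.io/v1",
      "kind": "CustomResourceDefinition",
      "metadata": {"name": "widgets.example.com"},
      "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "versions": [{
          "name": "v1", "served": true, "storage": true,
          "schema": {"openAPIV3Schema": {
            "type": "object",
            "properties": {"spec": {
              "type": "object",
              "properties": {"replicas": {"type": "integer", "default": 1}}
            }}
          }}
        }]
      }
    }
    EOF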
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:58:37.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-434" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":244,"skipped":4038,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:58:37.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 20 23:58:37.141: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 20 23:58:37.164: INFO: Waiting for terminating namespaces to be deleted...
Aug 20 23:58:37.167: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 20 23:58:37.184: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 20 23:58:37.184: INFO: 	Container app ready: true, restart count 0
Aug 20 23:58:37.184: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 20 23:58:37.184: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 20 23:58:37.184: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 20 23:58:37.184: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 20 23:58:37.184: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 20 23:58:37.199: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 20 23:58:37.200: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 20 23:58:37.200: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 20 23:58:37.200: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 20 23:58:37.200: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 20 23:58:37.200: INFO: 	Container app ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 20 23:58:37.310: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker
Aug 20 23:58:37.310: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2
Aug 20 23:58:37.310: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 20 23:58:37.310: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 20 23:58:37.310: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 20 23:58:37.310: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 20 23:58:37.310: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Aug 20 23:58:37.315: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d75b215-240c-406a-9358-1c155a3d3aea.162d1f5abc1a16c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4057/filler-pod-3d75b215-240c-406a-9358-1c155a3d3aea to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d75b215-240c-406a-9358-1c155a3d3aea.162d1f5b42c8b447], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d75b215-240c-406a-9358-1c155a3d3aea.162d1f5b7910d9ac], Reason = [Created], Message = [Created container filler-pod-3d75b215-240c-406a-9358-1c155a3d3aea]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d75b215-240c-406a-9358-1c155a3d3aea.162d1f5b8c07b933], Reason = [Started], Message = [Started container filler-pod-3d75b215-240c-406a-9358-1c155a3d3aea]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7c6786e5-b265-41f2-972c-2b04a9e8840c.162d1f5aba97f025], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4057/filler-pod-7c6786e5-b265-41f2-972c-2b04a9e8840c to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7c6786e5-b265-41f2-972c-2b04a9e8840c.162d1f5b048eb679], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7c6786e5-b265-41f2-972c-2b04a9e8840c.162d1f5b4ed632b7], Reason = [Created], Message = [Created container filler-pod-7c6786e5-b265-41f2-972c-2b04a9e8840c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7c6786e5-b265-41f2-972c-2b04a9e8840c.162d1f5b61f6b334], Reason = [Started], Message = [Started container filler-pod-7c6786e5-b265-41f2-972c-2b04a9e8840c]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162d1f5bab79457f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
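(The "additional" pod is rejected because its CPU request cannot fit on any untainted node once the filler pods are placed. A pod declares such a request via spec.containers[].resources; a minimal sketch, with an illustrative request size:)
    kubectl apply --namespace=sched-pred-4057 -f - <<'EOF'
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "additional-pod-example"},
      "spec": {
        "containers": [{
          "name": "pause",
          "image": "k8s.gcr.io/pause:3.1",
          "resources": {"requests": {"cpu": "1"}, "limits": {"cpu": "1"}}
        }]
      }
    }
    EOF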
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:58:42.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4057" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:5.372 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":245,"skipped":4088,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:58:42.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
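(A BestEffort-scoped quota counts only pods that set no resource requests or limits, and such a quota may only constrain the pod count; the complementary scope is NotBestEffort. A minimal sketch of the first quota above, with an illustrative name and limit:)
    kubectl apply --namespace=resourcequota-8932 -f - <<'EOF'
    {
      "apiVersion": "v1",
      "kind": "ResourceQuota",
      "metadata": {"name": "quota-besteffort-example"},
      "spec": {
        "hard": {"pods": "5"},
        "scopes": ["BestEffort"]
      }
    }
    EOF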
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:58:58.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8932" for this suite.

• [SLOW TEST:16.310 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":246,"skipped":4095,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:58:58.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Aug 20 23:58:58.889: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
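(With -p 0 the proxy binds an ephemeral port and prints it on startup, which is what the test parses before curling. A manual session would look roughly like:)
    kubectl proxy --port=0 --disable-filter=true &
    # prints e.g. "Starting to serve on 127.0.0.1:40633"; use that port below
    curl http://127.0.0.1:40633/api/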
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:58:58.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4758" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":247,"skipped":4101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:58:58.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
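(The pod under test mounts a memory-backed emptyDir as a non-root user and verifies a file created with 0777 permissions; a minimal sketch with illustrative names, command, and UID:)
    kubectl apply --namespace=emptydir-2807 -f - <<'EOF'
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "emptydir-tmpfs-example"},
      "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1001},
        "containers": [{
          "name": "test-container",
          "image": "docker.io/library/busybox:1.29",
          "command": ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"],
          "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}]
        }],
        "volumes": [{"name": "test-volume", "emptyDir": {"medium": "Memory"}}]
      }
    }
    EOF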
Aug 20 23:58:59.088: INFO: Waiting up to 5m0s for pod "pod-c5447c98-d992-4171-9244-2bc5a08c29ec" in namespace "emptydir-2807" to be "success or failure"
Aug 20 23:58:59.103: INFO: Pod "pod-c5447c98-d992-4171-9244-2bc5a08c29ec": Phase="Pending", Reason="", readiness=false. Elapsed: 14.817019ms
Aug 20 23:59:01.106: INFO: Pod "pod-c5447c98-d992-4171-9244-2bc5a08c29ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017968961s
Aug 20 23:59:03.110: INFO: Pod "pod-c5447c98-d992-4171-9244-2bc5a08c29ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021814699s
Aug 20 23:59:05.114: INFO: Pod "pod-c5447c98-d992-4171-9244-2bc5a08c29ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025785228s
STEP: Saw pod success
Aug 20 23:59:05.114: INFO: Pod "pod-c5447c98-d992-4171-9244-2bc5a08c29ec" satisfied condition "success or failure"
Aug 20 23:59:05.117: INFO: Trying to get logs from node jerma-worker2 pod pod-c5447c98-d992-4171-9244-2bc5a08c29ec container test-container: 
STEP: delete the pod
Aug 20 23:59:05.135: INFO: Waiting for pod pod-c5447c98-d992-4171-9244-2bc5a08c29ec to disappear
Aug 20 23:59:05.151: INFO: Pod pod-c5447c98-d992-4171-9244-2bc5a08c29ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 20 23:59:05.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2807" for this suite.

• [SLOW TEST:6.181 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4125,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 20 23:59:05.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-271
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 20 23:59:05.274: INFO: Found 0 stateful pods, waiting for 3
Aug 20 23:59:15.278: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:59:15.278: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:59:15.278: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 20 23:59:25.278: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:59:25.278: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:59:25.279: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 23:59:25.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-271 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 20 23:59:28.101: INFO: stderr: "I0820 23:59:27.964030    4247 log.go:172] (0xc0000e5080) (0xc000669f40) Create stream\nI0820 23:59:27.964074    4247 log.go:172] (0xc0000e5080) (0xc000669f40) Stream added, broadcasting: 1\nI0820 23:59:27.967651    4247 log.go:172] (0xc0000e5080) Reply frame received for 1\nI0820 23:59:27.967706    4247 log.go:172] (0xc0000e5080) (0xc0005de6e0) Create stream\nI0820 23:59:27.967723    4247 log.go:172] (0xc0000e5080) (0xc0005de6e0) Stream added, broadcasting: 3\nI0820 23:59:27.968690    4247 log.go:172] (0xc0000e5080) Reply frame received for 3\nI0820 23:59:27.968823    4247 log.go:172] (0xc0000e5080) (0xc0007834a0) Create stream\nI0820 23:59:27.968845    4247 log.go:172] (0xc0000e5080) (0xc0007834a0) Stream added, broadcasting: 5\nI0820 23:59:27.969992    4247 log.go:172] (0xc0000e5080) Reply frame received for 5\nI0820 23:59:28.049872    4247 log.go:172] (0xc0000e5080) Data frame received for 5\nI0820 23:59:28.049895    4247 log.go:172] (0xc0007834a0) (5) Data frame handling\nI0820 23:59:28.049906    4247 log.go:172] (0xc0007834a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0820 23:59:28.087514    4247 log.go:172] (0xc0000e5080) Data frame received for 3\nI0820 23:59:28.087558    4247 log.go:172] (0xc0005de6e0) (3) Data frame handling\nI0820 23:59:28.087589    4247 log.go:172] (0xc0005de6e0) (3) Data frame sent\nI0820 23:59:28.087681    4247 log.go:172] (0xc0000e5080) Data frame received for 3\nI0820 23:59:28.087697    4247 log.go:172] (0xc0005de6e0) (3) Data frame handling\nI0820 23:59:28.087856    4247 log.go:172] (0xc0000e5080) Data frame received for 5\nI0820 23:59:28.087879    4247 log.go:172] (0xc0007834a0) (5) Data frame handling\nI0820 23:59:28.090233    4247 log.go:172] (0xc0000e5080) Data frame received for 1\nI0820 23:59:28.090263    4247 log.go:172] (0xc000669f40) (1) Data frame handling\nI0820 23:59:28.090286    4247 log.go:172] (0xc000669f40) (1) Data frame sent\nI0820 23:59:28.090308    4247 log.go:172] (0xc0000e5080) (0xc000669f40) Stream removed, broadcasting: 1\nI0820 23:59:28.090345    4247 log.go:172] (0xc0000e5080) Go away received\nI0820 23:59:28.090765    4247 log.go:172] (0xc0000e5080) (0xc000669f40) Stream removed, broadcasting: 1\nI0820 23:59:28.090786    4247 log.go:172] (0xc0000e5080) (0xc0005de6e0) Stream removed, broadcasting: 3\nI0820 23:59:28.090809    4247 log.go:172] (0xc0000e5080) (0xc0007834a0) Stream removed, broadcasting: 5\n"
Aug 20 23:59:28.101: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 20 23:59:28.101: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
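(The template update is applied through the API client; an equivalent manual command, assuming the container in ss2 is named "webserver" as in the e2e fixtures — the name is not shown in this log — would be:)
    kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine --namespace=statefulset-271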
Aug 20 23:59:38.133: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 20 23:59:48.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-271 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 20 23:59:48.448: INFO: stderr: "I0820 23:59:48.331069    4275 log.go:172] (0xc0009c0000) (0xc00097a000) Create stream\nI0820 23:59:48.331159    4275 log.go:172] (0xc0009c0000) (0xc00097a000) Stream added, broadcasting: 1\nI0820 23:59:48.334381    4275 log.go:172] (0xc0009c0000) Reply frame received for 1\nI0820 23:59:48.334718    4275 log.go:172] (0xc0009c0000) (0xc00089a000) Create stream\nI0820 23:59:48.334750    4275 log.go:172] (0xc0009c0000) (0xc00089a000) Stream added, broadcasting: 3\nI0820 23:59:48.336353    4275 log.go:172] (0xc0009c0000) Reply frame received for 3\nI0820 23:59:48.336391    4275 log.go:172] (0xc0009c0000) (0xc00089a0a0) Create stream\nI0820 23:59:48.336402    4275 log.go:172] (0xc0009c0000) (0xc00089a0a0) Stream added, broadcasting: 5\nI0820 23:59:48.337588    4275 log.go:172] (0xc0009c0000) Reply frame received for 5\nI0820 23:59:48.438325    4275 log.go:172] (0xc0009c0000) Data frame received for 5\nI0820 23:59:48.438356    4275 log.go:172] (0xc00089a0a0) (5) Data frame handling\nI0820 23:59:48.438368    4275 log.go:172] (0xc00089a0a0) (5) Data frame sent\nI0820 23:59:48.438377    4275 log.go:172] (0xc0009c0000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0820 23:59:48.438407    4275 log.go:172] (0xc0009c0000) Data frame received for 3\nI0820 23:59:48.438444    4275 log.go:172] (0xc00089a000) (3) Data frame handling\nI0820 23:59:48.438462    4275 log.go:172] (0xc00089a000) (3) Data frame sent\nI0820 23:59:48.438476    4275 log.go:172] (0xc0009c0000) Data frame received for 3\nI0820 23:59:48.438489    4275 log.go:172] (0xc00089a000) (3) Data frame handling\nI0820 23:59:48.438545    4275 log.go:172] (0xc00089a0a0) (5) Data frame handling\nI0820 23:59:48.439801    4275 log.go:172] (0xc0009c0000) Data frame received for 1\nI0820 23:59:48.439819    4275 log.go:172] (0xc00097a000) (1) Data frame handling\nI0820 23:59:48.439830    4275 log.go:172] (0xc00097a000) (1) Data frame sent\nI0820 23:59:48.439844    4275 log.go:172] (0xc0009c0000) (0xc00097a000) Stream removed, broadcasting: 1\nI0820 23:59:48.439888    4275 log.go:172] (0xc0009c0000) Go away received\nI0820 23:59:48.440161    4275 log.go:172] (0xc0009c0000) (0xc00097a000) Stream removed, broadcasting: 1\nI0820 23:59:48.440180    4275 log.go:172] (0xc0009c0000) (0xc00089a000) Stream removed, broadcasting: 3\nI0820 23:59:48.440188    4275 log.go:172] (0xc0009c0000) (0xc00089a0a0) Stream removed, broadcasting: 5\n"
Aug 20 23:59:48.448: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 20 23:59:48.448: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

STEP: Rolling back to a previous revision
Aug 21 00:00:10.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-271 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 00:00:11.057: INFO: stderr: "I0821 00:00:10.920326    4294 log.go:172] (0xc0000262c0) (0xc00022b5e0) Create stream\nI0821 00:00:10.920406    4294 log.go:172] (0xc0000262c0) (0xc00022b5e0) Stream added, broadcasting: 1\nI0821 00:00:10.923086    4294 log.go:172] (0xc0000262c0) Reply frame received for 1\nI0821 00:00:10.923131    4294 log.go:172] (0xc0000262c0) (0xc0008d2000) Create stream\nI0821 00:00:10.923156    4294 log.go:172] (0xc0000262c0) (0xc0008d2000) Stream added, broadcasting: 3\nI0821 00:00:10.923944    4294 log.go:172] (0xc0000262c0) Reply frame received for 3\nI0821 00:00:10.923971    4294 log.go:172] (0xc0000262c0) (0xc0009c0000) Create stream\nI0821 00:00:10.923980    4294 log.go:172] (0xc0000262c0) (0xc0009c0000) Stream added, broadcasting: 5\nI0821 00:00:10.924939    4294 log.go:172] (0xc0000262c0) Reply frame received for 5\nI0821 00:00:11.000011    4294 log.go:172] (0xc0000262c0) Data frame received for 5\nI0821 00:00:11.000038    4294 log.go:172] (0xc0009c0000) (5) Data frame handling\nI0821 00:00:11.000051    4294 log.go:172] (0xc0009c0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 00:00:11.046472    4294 log.go:172] (0xc0000262c0) Data frame received for 3\nI0821 00:00:11.046502    4294 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0821 00:00:11.046527    4294 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0821 00:00:11.046639    4294 log.go:172] (0xc0000262c0) Data frame received for 3\nI0821 00:00:11.046662    4294 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0821 00:00:11.046683    4294 log.go:172] (0xc0000262c0) Data frame received for 5\nI0821 00:00:11.046694    4294 log.go:172] (0xc0009c0000) (5) Data frame handling\nI0821 00:00:11.047969    4294 log.go:172] (0xc0000262c0) Data frame received for 1\nI0821 00:00:11.047993    4294 log.go:172] (0xc00022b5e0) (1) Data frame handling\nI0821 00:00:11.048014    4294 log.go:172] (0xc00022b5e0) (1) Data frame sent\nI0821 00:00:11.048032    4294 log.go:172] (0xc0000262c0) (0xc00022b5e0) Stream removed, broadcasting: 1\nI0821 00:00:11.048050    4294 log.go:172] (0xc0000262c0) Go away received\nI0821 00:00:11.048386    4294 log.go:172] (0xc0000262c0) (0xc00022b5e0) Stream removed, broadcasting: 1\nI0821 00:00:11.048409    4294 log.go:172] (0xc0000262c0) (0xc0008d2000) Stream removed, broadcasting: 3\nI0821 00:00:11.048418    4294 log.go:172] (0xc0000262c0) (0xc0009c0000) Stream removed, broadcasting: 5\n"
Aug 21 00:00:11.057: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 00:00:11.057: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 00:00:21.087: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 21 00:00:31.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-271 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 00:00:31.643: INFO: stderr: "I0821 00:00:31.551764    4314 log.go:172] (0xc000a48420) (0xc000531720) Create stream\nI0821 00:00:31.551824    4314 log.go:172] (0xc000a48420) (0xc000531720) Stream added, broadcasting: 1\nI0821 00:00:31.553682    4314 log.go:172] (0xc000a48420) Reply frame received for 1\nI0821 00:00:31.553740    4314 log.go:172] (0xc000a48420) (0xc000791cc0) Create stream\nI0821 00:00:31.553756    4314 log.go:172] (0xc000a48420) (0xc000791cc0) Stream added, broadcasting: 3\nI0821 00:00:31.554827    4314 log.go:172] (0xc000a48420) Reply frame received for 3\nI0821 00:00:31.554880    4314 log.go:172] (0xc000a48420) (0xc0009da000) Create stream\nI0821 00:00:31.554900    4314 log.go:172] (0xc000a48420) (0xc0009da000) Stream added, broadcasting: 5\nI0821 00:00:31.555784    4314 log.go:172] (0xc000a48420) Reply frame received for 5\nI0821 00:00:31.632267    4314 log.go:172] (0xc000a48420) Data frame received for 3\nI0821 00:00:31.632310    4314 log.go:172] (0xc000791cc0) (3) Data frame handling\nI0821 00:00:31.632332    4314 log.go:172] (0xc000791cc0) (3) Data frame sent\nI0821 00:00:31.632342    4314 log.go:172] (0xc000a48420) Data frame received for 3\nI0821 00:00:31.632353    4314 log.go:172] (0xc000791cc0) (3) Data frame handling\nI0821 00:00:31.632407    4314 log.go:172] (0xc000a48420) Data frame received for 5\nI0821 00:00:31.632423    4314 log.go:172] (0xc0009da000) (5) Data frame handling\nI0821 00:00:31.632434    4314 log.go:172] (0xc0009da000) (5) Data frame sent\nI0821 00:00:31.632440    4314 log.go:172] (0xc000a48420) Data frame received for 5\nI0821 00:00:31.632445    4314 log.go:172] (0xc0009da000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:00:31.634462    4314 log.go:172] (0xc000a48420) Data frame received for 1\nI0821 00:00:31.634505    4314 log.go:172] (0xc000531720) (1) Data frame handling\nI0821 00:00:31.634530    4314 log.go:172] (0xc000531720) (1) Data frame sent\nI0821 00:00:31.634553    4314 log.go:172] (0xc000a48420) (0xc000531720) Stream removed, broadcasting: 1\nI0821 00:00:31.634660    4314 log.go:172] (0xc000a48420) Go away received\nI0821 00:00:31.635018    4314 log.go:172] (0xc000a48420) (0xc000531720) Stream removed, broadcasting: 1\nI0821 00:00:31.635044    4314 log.go:172] (0xc000a48420) (0xc000791cc0) Stream removed, broadcasting: 3\nI0821 00:00:31.635057    4314 log.go:172] (0xc000a48420) (0xc0009da000) Stream removed, broadcasting: 5\n"
Aug 21 00:00:31.643: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 00:00:31.643: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 00:00:41.669: INFO: Waiting for StatefulSet statefulset-271/ss2 to complete update
Aug 21 00:00:41.669: INFO: Waiting for Pod statefulset-271/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 00:00:41.669: INFO: Waiting for Pod statefulset-271/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 00:00:51.705: INFO: Waiting for StatefulSet statefulset-271/ss2 to complete update
Aug 21 00:00:51.705: INFO: Waiting for Pod statefulset-271/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 00:01:01.743: INFO: Waiting for StatefulSet statefulset-271/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 00:01:11.677: INFO: Deleting all statefulset in ns statefulset-271
Aug 21 00:01:11.680: INFO: Scaling statefulset ss2 to 0
Aug 21 00:01:31.745: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 00:01:31.748: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:01:31.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-271" for this suite.

• [SLOW TEST:146.608 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":249,"skipped":4172,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:01:31.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-np2p
STEP: Creating a pod to test atomic-volume-subpath
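(Shape of the volume under test: a secret volume — one of the "atomic writer" volume types — mounted with subPath, so the container sees a single key as one file. Secret name, key, and paths below are illustrative:)
    kubectl create secret generic my-secret --from-literal=the-key=mounted-data --namespace=subpath-9112
    kubectl apply --namespace=subpath-9112 -f - <<'EOF'
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "pod-subpath-secret-example"},
      "spec": {
        "restartPolicy": "Never",
        "containers": [{
          "name": "test-container-subpath",
          "image": "docker.io/library/busybox:1.29",
          "command": ["sh", "-c", "cat /probe-volume/mounted-file"],
          "volumeMounts": [{
            "name": "test-volume",
            "mountPath": "/probe-volume/mounted-file",
            "subPath": "the-key"
          }]
        }],
        "volumes": [{"name": "test-volume", "secret": {"secretName": "my-secret"}}]
      }
    }
    EOF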
Aug 21 00:01:31.944: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-np2p" in namespace "subpath-9112" to be "success or failure"
Aug 21 00:01:31.949: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.598638ms
Aug 21 00:01:33.997: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052984087s
Aug 21 00:01:36.006: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 4.062039824s
Aug 21 00:01:38.010: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 6.065816405s
Aug 21 00:01:40.014: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 8.069824606s
Aug 21 00:01:42.017: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 10.073007905s
Aug 21 00:01:44.021: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 12.076771569s
Aug 21 00:01:46.024: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 14.080372792s
Aug 21 00:01:48.029: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 16.085070256s
Aug 21 00:01:50.033: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 18.088603318s
Aug 21 00:01:52.035: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 20.091062693s
Aug 21 00:01:54.040: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Running", Reason="", readiness=true. Elapsed: 22.095599761s
Aug 21 00:01:56.043: INFO: Pod "pod-subpath-test-secret-np2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.099425538s
STEP: Saw pod success
Aug 21 00:01:56.044: INFO: Pod "pod-subpath-test-secret-np2p" satisfied condition "success or failure"
Aug 21 00:01:56.047: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-np2p container test-container-subpath-secret-np2p: 
STEP: delete the pod
Aug 21 00:01:56.073: INFO: Waiting for pod pod-subpath-test-secret-np2p to disappear
Aug 21 00:01:56.078: INFO: Pod pod-subpath-test-secret-np2p no longer exists
STEP: Deleting pod pod-subpath-test-secret-np2p
Aug 21 00:01:56.078: INFO: Deleting pod "pod-subpath-test-secret-np2p" in namespace "subpath-9112"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:01:56.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9112" for this suite.

• [SLOW TEST:24.340 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":250,"skipped":4175,"failed":0}
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:01:56.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 21 00:01:56.210: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6306" to be "success or failure"
Aug 21 00:01:56.284: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 74.280478ms
Aug 21 00:01:58.288: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078211183s
Aug 21 00:02:00.291: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081624136s
Aug 21 00:02:02.295: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085575643s
STEP: Saw pod success
Aug 21 00:02:02.296: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 21 00:02:02.299: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 21 00:02:02.331: INFO: Waiting for pod pod-host-path-test to disappear
Aug 21 00:02:02.336: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:02:02.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6306" for this suite.

• [SLOW TEST:6.230 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4176,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:02:02.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-6e0955f2-52f5-4e9a-9acc-2ae124bab280
STEP: Creating a pod to test consume configMaps
Aug 21 00:02:02.439: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76" in namespace "projected-8503" to be "success or failure"
Aug 21 00:02:02.512: INFO: Pod "pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76": Phase="Pending", Reason="", readiness=false. Elapsed: 73.258827ms
Aug 21 00:02:04.516: INFO: Pod "pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077145881s
Aug 21 00:02:06.519: INFO: Pod "pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080350095s
Aug 21 00:02:08.523: INFO: Pod "pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084190685s
STEP: Saw pod success
Aug 21 00:02:08.523: INFO: Pod "pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76" satisfied condition "success or failure"
Aug 21 00:02:08.527: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 00:02:08.559: INFO: Waiting for pod pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76 to disappear
Aug 21 00:02:08.578: INFO: Pod pod-projected-configmaps-290f618f-fb65-402a-ac35-47a59493dd76 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:02:08.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8503" for this suite.

• [SLOW TEST:6.243 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4179,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:02:08.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 21 00:02:08.657: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 00:02:08.669: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 00:02:08.671: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 21 00:02:08.676: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 21 00:02:08.676: INFO: 	Container app ready: true, restart count 0
Aug 21 00:02:08.676: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 00:02:08.676: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 00:02:08.676: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 00:02:08.676: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 00:02:08.676: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 21 00:02:08.682: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 00:02:08.682: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 00:02:08.682: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 21 00:02:08.682: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 00:02:08.682: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 21 00:02:08.682: INFO: 	Container app ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5b6fd955-b9ba-4dbd-8dc9-e7409dafce01 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with the same hostPort 54321 but hostIP 127.0.0.2, on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-5b6fd955-b9ba-4dbd-8dc9-e7409dafce01 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5b6fd955-b9ba-4dbd-8dc9-e7409dafce01
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:02:31.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5296" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:22.802 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":253,"skipped":4183,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:02:31.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:02:31.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1" in namespace "downward-api-6887" to be "success or failure"
Aug 21 00:02:31.598: INFO: Pod "downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.769696ms
Aug 21 00:02:33.603: INFO: Pod "downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026347434s
Aug 21 00:02:36.752: INFO: Pod "downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.175951072s
STEP: Saw pod success
Aug 21 00:02:36.753: INFO: Pod "downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1" satisfied condition "success or failure"
Aug 21 00:02:36.758: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1 container client-container: 
STEP: delete the pod
Aug 21 00:02:37.091: INFO: Waiting for pod downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1 to disappear
Aug 21 00:02:37.152: INFO: Pod downwardapi-volume-50f90694-eabe-470f-b150-c87b400046c1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:02:37.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6887" for this suite.

• [SLOW TEST:5.796 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:02:37.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:02:37.221: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 21 00:02:38.479: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:02:38.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9601" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":255,"skipped":4246,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:02:38.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7862
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7862
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-7862
Aug 21 00:02:40.135: INFO: Found 0 stateful pods, waiting for 1
Aug 21 00:02:50.232: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 21 00:02:50.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 00:02:50.629: INFO: stderr: "I0821 00:02:50.493105    4335 log.go:172] (0xc000a8a000) (0xc0004df680) Create stream\nI0821 00:02:50.493161    4335 log.go:172] (0xc000a8a000) (0xc0004df680) Stream added, broadcasting: 1\nI0821 00:02:50.494980    4335 log.go:172] (0xc000a8a000) Reply frame received for 1\nI0821 00:02:50.495013    4335 log.go:172] (0xc000a8a000) (0xc0009bc000) Create stream\nI0821 00:02:50.495020    4335 log.go:172] (0xc000a8a000) (0xc0009bc000) Stream added, broadcasting: 3\nI0821 00:02:50.495688    4335 log.go:172] (0xc000a8a000) Reply frame received for 3\nI0821 00:02:50.495719    4335 log.go:172] (0xc000a8a000) (0xc00096a000) Create stream\nI0821 00:02:50.495734    4335 log.go:172] (0xc000a8a000) (0xc00096a000) Stream added, broadcasting: 5\nI0821 00:02:50.496403    4335 log.go:172] (0xc000a8a000) Reply frame received for 5\nI0821 00:02:50.542697    4335 log.go:172] (0xc000a8a000) Data frame received for 5\nI0821 00:02:50.542714    4335 log.go:172] (0xc00096a000) (5) Data frame handling\nI0821 00:02:50.542723    4335 log.go:172] (0xc00096a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 00:02:50.616109    4335 log.go:172] (0xc000a8a000) Data frame received for 3\nI0821 00:02:50.616140    4335 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0821 00:02:50.616173    4335 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0821 00:02:50.616195    4335 log.go:172] (0xc000a8a000) Data frame received for 3\nI0821 00:02:50.616207    4335 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0821 00:02:50.616238    4335 log.go:172] (0xc000a8a000) Data frame received for 5\nI0821 00:02:50.616260    4335 log.go:172] (0xc00096a000) (5) Data frame handling\nI0821 00:02:50.618659    4335 log.go:172] (0xc000a8a000) Data frame received for 1\nI0821 00:02:50.618686    4335 log.go:172] (0xc0004df680) (1) Data frame handling\nI0821 00:02:50.618698    4335 log.go:172] (0xc0004df680) (1) Data frame sent\nI0821 00:02:50.618711    4335 log.go:172] (0xc000a8a000) (0xc0004df680) Stream removed, broadcasting: 1\nI0821 00:02:50.618851    4335 log.go:172] (0xc000a8a000) Go away received\nI0821 00:02:50.619092    4335 log.go:172] (0xc000a8a000) (0xc0004df680) Stream removed, broadcasting: 1\nI0821 00:02:50.619120    4335 log.go:172] (0xc000a8a000) (0xc0009bc000) Stream removed, broadcasting: 3\nI0821 00:02:50.619133    4335 log.go:172] (0xc000a8a000) (0xc00096a000) Stream removed, broadcasting: 5\n"
Aug 21 00:02:50.629: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 00:02:50.629: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 00:02:50.636: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 21 00:03:00.642: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 00:03:00.642: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 00:03:00.663: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999605s
Aug 21 00:03:01.668: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988377348s
Aug 21 00:03:02.673: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983216425s
Aug 21 00:03:03.678: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.978576524s
Aug 21 00:03:04.682: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.973772809s
Aug 21 00:03:05.686: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.969572281s
Aug 21 00:03:06.691: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.965096073s
Aug 21 00:03:07.696: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.960681147s
Aug 21 00:03:08.704: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.955666666s
Aug 21 00:03:09.709: INFO: Verifying statefulset ss doesn't scale past 1 for another 946.903368ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7862
Aug 21 00:03:10.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 00:03:10.926: INFO: stderr: "I0821 00:03:10.832016    4355 log.go:172] (0xc0001196b0) (0xc000a82000) Create stream\nI0821 00:03:10.832060    4355 log.go:172] (0xc0001196b0) (0xc000a82000) Stream added, broadcasting: 1\nI0821 00:03:10.834203    4355 log.go:172] (0xc0001196b0) Reply frame received for 1\nI0821 00:03:10.834243    4355 log.go:172] (0xc0001196b0) (0xc00066bb80) Create stream\nI0821 00:03:10.834258    4355 log.go:172] (0xc0001196b0) (0xc00066bb80) Stream added, broadcasting: 3\nI0821 00:03:10.835365    4355 log.go:172] (0xc0001196b0) Reply frame received for 3\nI0821 00:03:10.835400    4355 log.go:172] (0xc0001196b0) (0xc00026a000) Create stream\nI0821 00:03:10.835421    4355 log.go:172] (0xc0001196b0) (0xc00026a000) Stream added, broadcasting: 5\nI0821 00:03:10.836267    4355 log.go:172] (0xc0001196b0) Reply frame received for 5\nI0821 00:03:10.914318    4355 log.go:172] (0xc0001196b0) Data frame received for 5\nI0821 00:03:10.914359    4355 log.go:172] (0xc00026a000) (5) Data frame handling\nI0821 00:03:10.914372    4355 log.go:172] (0xc00026a000) (5) Data frame sent\nI0821 00:03:10.914381    4355 log.go:172] (0xc0001196b0) Data frame received for 5\nI0821 00:03:10.914390    4355 log.go:172] (0xc00026a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:03:10.914417    4355 log.go:172] (0xc0001196b0) Data frame received for 3\nI0821 00:03:10.914434    4355 log.go:172] (0xc00066bb80) (3) Data frame handling\nI0821 00:03:10.914450    4355 log.go:172] (0xc00066bb80) (3) Data frame sent\nI0821 00:03:10.914462    4355 log.go:172] (0xc0001196b0) Data frame received for 3\nI0821 00:03:10.914474    4355 log.go:172] (0xc00066bb80) (3) Data frame handling\nI0821 00:03:10.915830    4355 log.go:172] (0xc0001196b0) Data frame received for 1\nI0821 00:03:10.915857    4355 log.go:172] (0xc000a82000) (1) Data frame handling\nI0821 00:03:10.915872    4355 log.go:172] (0xc000a82000) (1) Data frame sent\nI0821 00:03:10.915883    4355 log.go:172] (0xc0001196b0) (0xc000a82000) Stream removed, broadcasting: 1\nI0821 00:03:10.915894    4355 log.go:172] (0xc0001196b0) Go away received\nI0821 00:03:10.916344    4355 log.go:172] (0xc0001196b0) (0xc000a82000) Stream removed, broadcasting: 1\nI0821 00:03:10.916374    4355 log.go:172] (0xc0001196b0) (0xc00066bb80) Stream removed, broadcasting: 3\nI0821 00:03:10.916397    4355 log.go:172] (0xc0001196b0) (0xc00026a000) Stream removed, broadcasting: 5\n"
Aug 21 00:03:10.926: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 00:03:10.926: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 00:03:10.930: INFO: Found 1 stateful pods, waiting for 3
Aug 21 00:03:20.934: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 00:03:20.934: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 00:03:20.934: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 21 00:03:20.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 00:03:21.162: INFO: stderr: "I0821 00:03:21.089375    4375 log.go:172] (0xc000114370) (0xc000659b80) Create stream\nI0821 00:03:21.089474    4375 log.go:172] (0xc000114370) (0xc000659b80) Stream added, broadcasting: 1\nI0821 00:03:21.092070    4375 log.go:172] (0xc000114370) Reply frame received for 1\nI0821 00:03:21.092095    4375 log.go:172] (0xc000114370) (0xc000924000) Create stream\nI0821 00:03:21.092102    4375 log.go:172] (0xc000114370) (0xc000924000) Stream added, broadcasting: 3\nI0821 00:03:21.093167    4375 log.go:172] (0xc000114370) Reply frame received for 3\nI0821 00:03:21.093201    4375 log.go:172] (0xc000114370) (0xc000273400) Create stream\nI0821 00:03:21.093209    4375 log.go:172] (0xc000114370) (0xc000273400) Stream added, broadcasting: 5\nI0821 00:03:21.094349    4375 log.go:172] (0xc000114370) Reply frame received for 5\nI0821 00:03:21.152949    4375 log.go:172] (0xc000114370) Data frame received for 3\nI0821 00:03:21.152994    4375 log.go:172] (0xc000924000) (3) Data frame handling\nI0821 00:03:21.153007    4375 log.go:172] (0xc000924000) (3) Data frame sent\nI0821 00:03:21.153028    4375 log.go:172] (0xc000114370) Data frame received for 5\nI0821 00:03:21.153035    4375 log.go:172] (0xc000273400) (5) Data frame handling\nI0821 00:03:21.153041    4375 log.go:172] (0xc000273400) (5) Data frame sent\nI0821 00:03:21.153046    4375 log.go:172] (0xc000114370) Data frame received for 5\nI0821 00:03:21.153051    4375 log.go:172] (0xc000273400) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 00:03:21.153103    4375 log.go:172] (0xc000114370) Data frame received for 3\nI0821 00:03:21.153111    4375 log.go:172] (0xc000924000) (3) Data frame handling\nI0821 00:03:21.154539    4375 log.go:172] (0xc000114370) Data frame received for 1\nI0821 00:03:21.154569    4375 log.go:172] (0xc000659b80) (1) Data frame handling\nI0821 00:03:21.154588    4375 log.go:172] (0xc000659b80) (1) Data frame sent\nI0821 00:03:21.154606    4375 log.go:172] (0xc000114370) (0xc000659b80) Stream removed, broadcasting: 1\nI0821 00:03:21.154629    4375 log.go:172] (0xc000114370) Go away received\nI0821 00:03:21.154906    4375 log.go:172] (0xc000114370) (0xc000659b80) Stream removed, broadcasting: 1\nI0821 00:03:21.154917    4375 log.go:172] (0xc000114370) (0xc000924000) Stream removed, broadcasting: 3\nI0821 00:03:21.154922    4375 log.go:172] (0xc000114370) (0xc000273400) Stream removed, broadcasting: 5\n"
Aug 21 00:03:21.162: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 00:03:21.162: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 00:03:21.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 00:03:21.611: INFO: stderr: "I0821 00:03:21.469429    4395 log.go:172] (0xc000974d10) (0xc00098ac80) Create stream\nI0821 00:03:21.469502    4395 log.go:172] (0xc000974d10) (0xc00098ac80) Stream added, broadcasting: 1\nI0821 00:03:21.473796    4395 log.go:172] (0xc000974d10) Reply frame received for 1\nI0821 00:03:21.473829    4395 log.go:172] (0xc000974d10) (0xc0006ffb80) Create stream\nI0821 00:03:21.473838    4395 log.go:172] (0xc000974d10) (0xc0006ffb80) Stream added, broadcasting: 3\nI0821 00:03:21.474671    4395 log.go:172] (0xc000974d10) Reply frame received for 3\nI0821 00:03:21.474703    4395 log.go:172] (0xc000974d10) (0xc0006a8780) Create stream\nI0821 00:03:21.474714    4395 log.go:172] (0xc000974d10) (0xc0006a8780) Stream added, broadcasting: 5\nI0821 00:03:21.475722    4395 log.go:172] (0xc000974d10) Reply frame received for 5\nI0821 00:03:21.567961    4395 log.go:172] (0xc000974d10) Data frame received for 5\nI0821 00:03:21.567983    4395 log.go:172] (0xc0006a8780) (5) Data frame handling\nI0821 00:03:21.567997    4395 log.go:172] (0xc0006a8780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 00:03:21.602702    4395 log.go:172] (0xc000974d10) Data frame received for 3\nI0821 00:03:21.602760    4395 log.go:172] (0xc0006ffb80) (3) Data frame handling\nI0821 00:03:21.602798    4395 log.go:172] (0xc0006ffb80) (3) Data frame sent\nI0821 00:03:21.602817    4395 log.go:172] (0xc000974d10) Data frame received for 3\nI0821 00:03:21.602830    4395 log.go:172] (0xc0006ffb80) (3) Data frame handling\nI0821 00:03:21.602996    4395 log.go:172] (0xc000974d10) Data frame received for 5\nI0821 00:03:21.603023    4395 log.go:172] (0xc0006a8780) (5) Data frame handling\nI0821 00:03:21.604904    4395 log.go:172] (0xc000974d10) Data frame received for 1\nI0821 00:03:21.604936    4395 log.go:172] (0xc00098ac80) (1) Data frame handling\nI0821 00:03:21.604965    4395 log.go:172] (0xc00098ac80) (1) Data frame sent\nI0821 00:03:21.604996    4395 log.go:172] (0xc000974d10) (0xc00098ac80) Stream removed, broadcasting: 1\nI0821 00:03:21.605030    4395 log.go:172] (0xc000974d10) Go away received\nI0821 00:03:21.605350    4395 log.go:172] (0xc000974d10) (0xc00098ac80) Stream removed, broadcasting: 1\nI0821 00:03:21.605384    4395 log.go:172] (0xc000974d10) (0xc0006ffb80) Stream removed, broadcasting: 3\nI0821 00:03:21.605405    4395 log.go:172] (0xc000974d10) (0xc0006a8780) Stream removed, broadcasting: 5\n"
Aug 21 00:03:21.611: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 00:03:21.611: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 00:03:21.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 00:03:21.865: INFO: stderr: "I0821 00:03:21.748283    4414 log.go:172] (0xc000b0a6e0) (0xc000a70000) Create stream\nI0821 00:03:21.748354    4414 log.go:172] (0xc000b0a6e0) (0xc000a70000) Stream added, broadcasting: 1\nI0821 00:03:21.751645    4414 log.go:172] (0xc000b0a6e0) Reply frame received for 1\nI0821 00:03:21.751696    4414 log.go:172] (0xc000b0a6e0) (0xc000a6e000) Create stream\nI0821 00:03:21.751708    4414 log.go:172] (0xc000b0a6e0) (0xc000a6e000) Stream added, broadcasting: 3\nI0821 00:03:21.752915    4414 log.go:172] (0xc000b0a6e0) Reply frame received for 3\nI0821 00:03:21.752941    4414 log.go:172] (0xc000b0a6e0) (0xc000a700a0) Create stream\nI0821 00:03:21.752951    4414 log.go:172] (0xc000b0a6e0) (0xc000a700a0) Stream added, broadcasting: 5\nI0821 00:03:21.755240    4414 log.go:172] (0xc000b0a6e0) Reply frame received for 5\nI0821 00:03:21.818801    4414 log.go:172] (0xc000b0a6e0) Data frame received for 5\nI0821 00:03:21.818820    4414 log.go:172] (0xc000a700a0) (5) Data frame handling\nI0821 00:03:21.818830    4414 log.go:172] (0xc000a700a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 00:03:21.856944    4414 log.go:172] (0xc000b0a6e0) Data frame received for 3\nI0821 00:03:21.856979    4414 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0821 00:03:21.856997    4414 log.go:172] (0xc000a6e000) (3) Data frame sent\nI0821 00:03:21.857152    4414 log.go:172] (0xc000b0a6e0) Data frame received for 3\nI0821 00:03:21.857198    4414 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0821 00:03:21.857254    4414 log.go:172] (0xc000b0a6e0) Data frame received for 5\nI0821 00:03:21.857304    4414 log.go:172] (0xc000a700a0) (5) Data frame handling\nI0821 00:03:21.859129    4414 log.go:172] (0xc000b0a6e0) Data frame received for 1\nI0821 00:03:21.859166    4414 log.go:172] (0xc000a70000) (1) Data frame handling\nI0821 00:03:21.859186    4414 log.go:172] (0xc000a70000) (1) Data frame sent\nI0821 00:03:21.859207    4414 log.go:172] (0xc000b0a6e0) (0xc000a70000) Stream removed, broadcasting: 1\nI0821 00:03:21.859231    4414 log.go:172] (0xc000b0a6e0) Go away received\nI0821 00:03:21.859707    4414 log.go:172] (0xc000b0a6e0) (0xc000a70000) Stream removed, broadcasting: 1\nI0821 00:03:21.859742    4414 log.go:172] (0xc000b0a6e0) (0xc000a6e000) Stream removed, broadcasting: 3\nI0821 00:03:21.859763    4414 log.go:172] (0xc000b0a6e0) (0xc000a700a0) Stream removed, broadcasting: 5\n"
Aug 21 00:03:21.866: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 00:03:21.866: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 00:03:21.866: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 00:03:21.868: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 21 00:03:31.893: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 00:03:31.893: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 00:03:31.893: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 00:03:31.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999432s
Aug 21 00:03:32.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.971167909s
Aug 21 00:03:33.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.96033373s
Aug 21 00:03:34.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.956131994s
Aug 21 00:03:35.952: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.951469866s
Aug 21 00:03:36.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.947177389s
Aug 21 00:03:37.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.937637331s
Aug 21 00:03:39.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.932827341s
Aug 21 00:03:40.038: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.86791087s
Aug 21 00:03:41.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 860.919376ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-7862
Aug 21 00:03:42.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 00:03:42.239: INFO: stderr: "I0821 00:03:42.168643    4433 log.go:172] (0xc0000e4370) (0xc0002b94a0) Create stream\nI0821 00:03:42.168688    4433 log.go:172] (0xc0000e4370) (0xc0002b94a0) Stream added, broadcasting: 1\nI0821 00:03:42.171770    4433 log.go:172] (0xc0000e4370) Reply frame received for 1\nI0821 00:03:42.171815    4433 log.go:172] (0xc0000e4370) (0xc0009540a0) Create stream\nI0821 00:03:42.171831    4433 log.go:172] (0xc0000e4370) (0xc0009540a0) Stream added, broadcasting: 3\nI0821 00:03:42.174090    4433 log.go:172] (0xc0000e4370) Reply frame received for 3\nI0821 00:03:42.174165    4433 log.go:172] (0xc0000e4370) (0xc0006f9a40) Create stream\nI0821 00:03:42.174190    4433 log.go:172] (0xc0000e4370) (0xc0006f9a40) Stream added, broadcasting: 5\nI0821 00:03:42.177110    4433 log.go:172] (0xc0000e4370) Reply frame received for 5\nI0821 00:03:42.230210    4433 log.go:172] (0xc0000e4370) Data frame received for 5\nI0821 00:03:42.230251    4433 log.go:172] (0xc0006f9a40) (5) Data frame handling\nI0821 00:03:42.230262    4433 log.go:172] (0xc0006f9a40) (5) Data frame sent\nI0821 00:03:42.230276    4433 log.go:172] (0xc0000e4370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:03:42.230292    4433 log.go:172] (0xc0006f9a40) (5) Data frame handling\nI0821 00:03:42.230335    4433 log.go:172] (0xc0000e4370) Data frame received for 3\nI0821 00:03:42.230361    4433 log.go:172] (0xc0009540a0) (3) Data frame handling\nI0821 00:03:42.230376    4433 log.go:172] (0xc0009540a0) (3) Data frame sent\nI0821 00:03:42.230391    4433 log.go:172] (0xc0000e4370) Data frame received for 3\nI0821 00:03:42.230404    4433 log.go:172] (0xc0009540a0) (3) Data frame handling\nI0821 00:03:42.231526    4433 log.go:172] (0xc0000e4370) Data frame received for 1\nI0821 00:03:42.231546    4433 log.go:172] (0xc0002b94a0) (1) Data frame handling\nI0821 00:03:42.231564    4433 log.go:172] (0xc0002b94a0) (1) Data frame sent\nI0821 00:03:42.231578    4433 log.go:172] (0xc0000e4370) (0xc0002b94a0) Stream removed, broadcasting: 1\nI0821 00:03:42.231600    4433 log.go:172] (0xc0000e4370) Go away received\nI0821 00:03:42.231921    4433 log.go:172] (0xc0000e4370) (0xc0002b94a0) Stream removed, broadcasting: 1\nI0821 00:03:42.231943    4433 log.go:172] (0xc0000e4370) (0xc0009540a0) Stream removed, broadcasting: 3\nI0821 00:03:42.231954    4433 log.go:172] (0xc0000e4370) (0xc0006f9a40) Stream removed, broadcasting: 5\n"
Aug 21 00:03:42.239: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 00:03:42.239: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 00:03:42.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 00:03:42.418: INFO: stderr: "I0821 00:03:42.359212    4453 log.go:172] (0xc0008e8000) (0xc00094c000) Create stream\nI0821 00:03:42.359254    4453 log.go:172] (0xc0008e8000) (0xc00094c000) Stream added, broadcasting: 1\nI0821 00:03:42.360818    4453 log.go:172] (0xc0008e8000) Reply frame received for 1\nI0821 00:03:42.360842    4453 log.go:172] (0xc0008e8000) (0xc00094c0a0) Create stream\nI0821 00:03:42.360849    4453 log.go:172] (0xc0008e8000) (0xc00094c0a0) Stream added, broadcasting: 3\nI0821 00:03:42.361502    4453 log.go:172] (0xc0008e8000) Reply frame received for 3\nI0821 00:03:42.361536    4453 log.go:172] (0xc0008e8000) (0xc000a7a000) Create stream\nI0821 00:03:42.361547    4453 log.go:172] (0xc0008e8000) (0xc000a7a000) Stream added, broadcasting: 5\nI0821 00:03:42.362280    4453 log.go:172] (0xc0008e8000) Reply frame received for 5\nI0821 00:03:42.409728    4453 log.go:172] (0xc0008e8000) Data frame received for 3\nI0821 00:03:42.409780    4453 log.go:172] (0xc00094c0a0) (3) Data frame handling\nI0821 00:03:42.409810    4453 log.go:172] (0xc00094c0a0) (3) Data frame sent\nI0821 00:03:42.409829    4453 log.go:172] (0xc0008e8000) Data frame received for 3\nI0821 00:03:42.409841    4453 log.go:172] (0xc00094c0a0) (3) Data frame handling\nI0821 00:03:42.409858    4453 log.go:172] (0xc0008e8000) Data frame received for 5\nI0821 00:03:42.409873    4453 log.go:172] (0xc000a7a000) (5) Data frame handling\nI0821 00:03:42.409896    4453 log.go:172] (0xc000a7a000) (5) Data frame sent\nI0821 00:03:42.409915    4453 log.go:172] (0xc0008e8000) Data frame received for 5\nI0821 00:03:42.409927    4453 log.go:172] (0xc000a7a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:03:42.411359    4453 log.go:172] (0xc0008e8000) Data frame received for 1\nI0821 00:03:42.411377    4453 log.go:172] (0xc00094c000) (1) Data frame handling\nI0821 00:03:42.411391    4453 log.go:172] (0xc00094c000) (1) Data frame sent\nI0821 00:03:42.411403    4453 log.go:172] (0xc0008e8000) (0xc00094c000) Stream removed, broadcasting: 1\nI0821 00:03:42.411544    4453 log.go:172] (0xc0008e8000) Go away received\nI0821 00:03:42.411663    4453 log.go:172] (0xc0008e8000) (0xc00094c000) Stream removed, broadcasting: 1\nI0821 00:03:42.411676    4453 log.go:172] (0xc0008e8000) (0xc00094c0a0) Stream removed, broadcasting: 3\nI0821 00:03:42.411682    4453 log.go:172] (0xc0008e8000) (0xc000a7a000) Stream removed, broadcasting: 5\n"
Aug 21 00:03:42.418: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 00:03:42.418: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 00:03:42.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7862 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 00:03:42.647: INFO: stderr: "I0821 00:03:42.534166    4473 log.go:172] (0xc000994000) (0xc0009600a0) Create stream\nI0821 00:03:42.534245    4473 log.go:172] (0xc000994000) (0xc0009600a0) Stream added, broadcasting: 1\nI0821 00:03:42.544977    4473 log.go:172] (0xc000994000) Reply frame received for 1\nI0821 00:03:42.545013    4473 log.go:172] (0xc000994000) (0xc000990000) Create stream\nI0821 00:03:42.545022    4473 log.go:172] (0xc000994000) (0xc000990000) Stream added, broadcasting: 3\nI0821 00:03:42.547203    4473 log.go:172] (0xc000994000) Reply frame received for 3\nI0821 00:03:42.547226    4473 log.go:172] (0xc000994000) (0xc0006d7ae0) Create stream\nI0821 00:03:42.547233    4473 log.go:172] (0xc000994000) (0xc0006d7ae0) Stream added, broadcasting: 5\nI0821 00:03:42.547846    4473 log.go:172] (0xc000994000) Reply frame received for 5\nI0821 00:03:42.635113    4473 log.go:172] (0xc000994000) Data frame received for 5\nI0821 00:03:42.635150    4473 log.go:172] (0xc0006d7ae0) (5) Data frame handling\nI0821 00:03:42.635175    4473 log.go:172] (0xc0006d7ae0) (5) Data frame sent\nI0821 00:03:42.635198    4473 log.go:172] (0xc000994000) Data frame received for 5\nI0821 00:03:42.635214    4473 log.go:172] (0xc0006d7ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 00:03:42.635232    4473 log.go:172] (0xc000994000) Data frame received for 3\nI0821 00:03:42.635242    4473 log.go:172] (0xc000990000) (3) Data frame handling\nI0821 00:03:42.635262    4473 log.go:172] (0xc000990000) (3) Data frame sent\nI0821 00:03:42.635283    4473 log.go:172] (0xc000994000) Data frame received for 3\nI0821 00:03:42.635297    4473 log.go:172] (0xc000990000) (3) Data frame handling\nI0821 00:03:42.636259    4473 log.go:172] (0xc000994000) Data frame received for 1\nI0821 00:03:42.636285    4473 log.go:172] (0xc0009600a0) (1) Data frame handling\nI0821 00:03:42.636316    4473 log.go:172] (0xc0009600a0) (1) Data frame sent\nI0821 00:03:42.636336    4473 log.go:172] (0xc000994000) (0xc0009600a0) Stream removed, broadcasting: 1\nI0821 00:03:42.636353    4473 log.go:172] (0xc000994000) Go away received\nI0821 00:03:42.636644    4473 log.go:172] (0xc000994000) (0xc0009600a0) Stream removed, broadcasting: 1\nI0821 00:03:42.636668    4473 log.go:172] (0xc000994000) (0xc000990000) Stream removed, broadcasting: 3\nI0821 00:03:42.636675    4473 log.go:172] (0xc000994000) (0xc0006d7ae0) Stream removed, broadcasting: 5\n"
Aug 21 00:03:42.647: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 00:03:42.647: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 00:03:42.647: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 00:04:03.082: INFO: Deleting all statefulset in ns statefulset-7862
Aug 21 00:04:03.086: INFO: Scaling statefulset ss to 0
Aug 21 00:04:03.094: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 00:04:03.096: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:04:03.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7862" for this suite.

• [SLOW TEST:84.510 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":256,"skipped":4294,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:04:03.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:04:03.311: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248" in namespace "projected-5345" to be "success or failure"
Aug 21 00:04:03.321: INFO: Pod "downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033371ms
Aug 21 00:04:05.357: INFO: Pod "downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046220463s
Aug 21 00:04:07.400: INFO: Pod "downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088552917s
STEP: Saw pod success
Aug 21 00:04:07.400: INFO: Pod "downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248" satisfied condition "success or failure"
Aug 21 00:04:07.403: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248 container client-container: 
STEP: delete the pod
Aug 21 00:04:07.445: INFO: Waiting for pod downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248 to disappear
Aug 21 00:04:07.471: INFO: Pod downwardapi-volume-4623ff39-d918-4480-9c96-68ae26344248 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:04:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5345" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4298,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:04:07.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:04:07.821: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730" in namespace "downward-api-6182" to be "success or failure"
Aug 21 00:04:07.832: INFO: Pod "downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730": Phase="Pending", Reason="", readiness=false. Elapsed: 11.709708ms
Aug 21 00:04:09.837: INFO: Pod "downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015950775s
Aug 21 00:04:11.839: INFO: Pod "downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018747282s
STEP: Saw pod success
Aug 21 00:04:11.839: INFO: Pod "downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730" satisfied condition "success or failure"
Aug 21 00:04:11.841: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730 container client-container: 
STEP: delete the pod
Aug 21 00:04:11.863: INFO: Waiting for pod downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730 to disappear
Aug 21 00:04:11.881: INFO: Pod downwardapi-volume-21af74c3-dd8f-4a60-9c20-154fe6217730 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:04:11.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6182" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4301,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:04:11.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 21 00:04:11.967: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 21 00:04:12.869: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 21 00:04:15.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565053, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 00:04:17.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565053, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 00:04:19.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565053, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733565052, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 00:04:21.918: INFO: Waited 621.690822ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:04:22.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-634" for this suite.

• [SLOW TEST:11.053 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":259,"skipped":4306,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:04:22.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 21 00:04:30.685: INFO: 10 pods remaining
Aug 21 00:04:30.685: INFO: 0 pods have nil DeletionTimestamp
Aug 21 00:04:30.685: INFO: 
Aug 21 00:04:31.801: INFO: 0 pods remaining
Aug 21 00:04:31.801: INFO: 0 pods have nil DeletionTimestamp
Aug 21 00:04:31.801: INFO: 
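Note: the deleteOptions the test name refers to is a foreground PropagationPolicy. The RC gets a deletionTimestamp and a foregroundDeletion finalizer, and is only removed once the garbage collector has deleted its pods, which is exactly the countdown visible above (10 pods remaining, then 0, then the RC goes). A minimal client-go sketch, assuming recent client-go signatures (the RC name is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Foreground propagation: the server keeps the RC around until the GC
	// has deleted all of its pods, then removes the RC itself.
	policy := metav1.DeletePropagationForeground
	err = clientset.CoreV1().ReplicationControllers("gc-356").Delete(
		context.TODO(),
		"simpletest.rc", // illustrative RC name
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	fmt.Println(err)
}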
STEP: Gathering metrics
W0821 00:04:33.515778       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 00:04:33.515: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:04:33.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-356" for this suite.

• [SLOW TEST:10.843 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":260,"skipped":4338,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:04:33.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-5107571b-b638-4590-8075-ef53844f74b9 in namespace container-probe-8739
Aug 21 00:04:38.794: INFO: Started pod liveness-5107571b-b638-4590-8075-ef53844f74b9 in namespace container-probe-8739
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 00:04:38.796: INFO: Initial restart count of pod liveness-5107571b-b638-4590-8075-ef53844f74b9 is 0
Aug 21 00:04:54.829: INFO: Restart count of pod container-probe-8739/liveness-5107571b-b638-4590-8075-ef53844f74b9 is now 1 (16.03305634s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:04:54.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8739" for this suite.

• [SLOW TEST:21.063 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4350,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:04:54.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9417.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 61.41.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.41.61_udp@PTR;check="$$(dig +tcp +noall +answer +search 61.41.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.41.61_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9417.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9417.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9417.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9417.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 61.41.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.41.61_udp@PTR;check="$$(dig +tcp +noall +answer +search 61.41.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.41.61_tcp@PTR;sleep 1; done
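Note on the command loops above: each dig invocation writes an OK marker file under /results that the test later reads back through the probe pod. The queries cover A and SRV lookups over both UDP (+notcp) and TCP (+tcp), a pod A record derived from the pod's own IP, and a PTR lookup of the service's cluster IP. The same checks expressed as a Go sketch with net.Resolver (these names only resolve from inside the cluster's DNS):

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	ctx := context.TODO()
	r := &net.Resolver{}

	// Equivalent of `dig ... dns-test-service.dns-9417.svc.cluster.local A`.
	addrs, err := r.LookupHost(ctx, "dns-test-service.dns-9417.svc.cluster.local")
	fmt.Println(addrs, err)

	// Equivalent of the SRV query for _http._tcp.dns-test-service...:
	// services publish one SRV record per named port.
	_, srvs, err := r.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-9417.svc.cluster.local")
	for _, s := range srvs {
		fmt.Println(s.Target, s.Port)
	}
	fmt.Println(err)

	// Equivalent of the PTR probe against 61.41.100.10.in-addr.arpa.
	names, err := r.LookupAddr(ctx, "10.100.41.61")
	fmt.Println(names, err)
}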

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 00:05:03.281: INFO: Unable to read wheezy_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.285: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.288: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.292: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.311: INFO: Unable to read jessie_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.314: INFO: Unable to read jessie_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.317: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.320: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:03.336: INFO: Lookups using dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22 failed for: [wheezy_udp@dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_udp@dns-test-service.dns-9417.svc.cluster.local jessie_tcp@dns-test-service.dns-9417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local]

Aug 21 00:05:08.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.343: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.347: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.371: INFO: Unable to read jessie_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.376: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.379: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:08.440: INFO: Lookups using dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22 failed for: [wheezy_udp@dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_udp@dns-test-service.dns-9417.svc.cluster.local jessie_tcp@dns-test-service.dns-9417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local]

Aug 21 00:05:13.341: INFO: Unable to read wheezy_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.345: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.348: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.351: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.369: INFO: Unable to read jessie_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.374: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.376: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:13.392: INFO: Lookups using dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22 failed for: [wheezy_udp@dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_udp@dns-test-service.dns-9417.svc.cluster.local jessie_tcp@dns-test-service.dns-9417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local]

Aug 21 00:05:18.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.345: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.348: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.351: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.369: INFO: Unable to read jessie_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.378: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.381: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:18.394: INFO: Lookups using dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22 failed for: [wheezy_udp@dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_udp@dns-test-service.dns-9417.svc.cluster.local jessie_tcp@dns-test-service.dns-9417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local]

Aug 21 00:05:23.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.343: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.346: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.348: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.367: INFO: Unable to read jessie_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.370: INFO: Unable to read jessie_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.372: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.374: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:23.388: INFO: Lookups using dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22 failed for: [wheezy_udp@dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_udp@dns-test-service.dns-9417.svc.cluster.local jessie_tcp@dns-test-service.dns-9417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local]

Aug 21 00:05:28.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.382: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.736: INFO: Unable to read jessie_udp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.739: INFO: Unable to read jessie_tcp@dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.742: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.745: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local from pod dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22: the server could not find the requested resource (get pods dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22)
Aug 21 00:05:28.760: INFO: Lookups using dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22 failed for: [wheezy_udp@dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@dns-test-service.dns-9417.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_udp@dns-test-service.dns-9417.svc.cluster.local jessie_tcp@dns-test-service.dns-9417.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9417.svc.cluster.local]

Aug 21 00:05:33.410: INFO: DNS probes using dns-9417/dns-test-0e248a76-f262-47ff-ac44-c29a8d571a22 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:05:34.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9417" for this suite.

• [SLOW TEST:39.285 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":262,"skipped":4352,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:05:34.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 21 00:05:46.309: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:46.309: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:46.378117       6 log.go:172] (0xc002dee370) (0xc000eaf0e0) Create stream
I0821 00:05:46.378149       6 log.go:172] (0xc002dee370) (0xc000eaf0e0) Stream added, broadcasting: 1
I0821 00:05:46.379946       6 log.go:172] (0xc002dee370) Reply frame received for 1
I0821 00:05:46.379976       6 log.go:172] (0xc002dee370) (0xc002dbd360) Create stream
I0821 00:05:46.379986       6 log.go:172] (0xc002dee370) (0xc002dbd360) Stream added, broadcasting: 3
I0821 00:05:46.381209       6 log.go:172] (0xc002dee370) Reply frame received for 3
I0821 00:05:46.381254       6 log.go:172] (0xc002dee370) (0xc000eaf180) Create stream
I0821 00:05:46.381263       6 log.go:172] (0xc002dee370) (0xc000eaf180) Stream added, broadcasting: 5
I0821 00:05:46.382312       6 log.go:172] (0xc002dee370) Reply frame received for 5
I0821 00:05:46.439772       6 log.go:172] (0xc002dee370) Data frame received for 5
I0821 00:05:46.439841       6 log.go:172] (0xc000eaf180) (5) Data frame handling
I0821 00:05:46.439889       6 log.go:172] (0xc002dee370) Data frame received for 3
I0821 00:05:46.439910       6 log.go:172] (0xc002dbd360) (3) Data frame handling
I0821 00:05:46.439937       6 log.go:172] (0xc002dbd360) (3) Data frame sent
I0821 00:05:46.439960       6 log.go:172] (0xc002dee370) Data frame received for 3
I0821 00:05:46.439980       6 log.go:172] (0xc002dbd360) (3) Data frame handling
I0821 00:05:46.442040       6 log.go:172] (0xc002dee370) Data frame received for 1
I0821 00:05:46.442087       6 log.go:172] (0xc000eaf0e0) (1) Data frame handling
I0821 00:05:46.442107       6 log.go:172] (0xc000eaf0e0) (1) Data frame sent
I0821 00:05:46.442118       6 log.go:172] (0xc002dee370) (0xc000eaf0e0) Stream removed, broadcasting: 1
I0821 00:05:46.442127       6 log.go:172] (0xc002dee370) Go away received
I0821 00:05:46.442291       6 log.go:172] (0xc002dee370) (0xc000eaf0e0) Stream removed, broadcasting: 1
I0821 00:05:46.442325       6 log.go:172] (0xc002dee370) (0xc002dbd360) Stream removed, broadcasting: 3
I0821 00:05:46.442352       6 log.go:172] (0xc002dee370) (0xc000eaf180) Stream removed, broadcasting: 5
Aug 21 00:05:46.442: INFO: Exec stderr: ""
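Note on the frames above: each ExecWithOptions block is one exec call tunnelled over SPDY, and the numbered streams being added (1, 3, 5) are, in order, the error/status, stdout, and stderr channels; the "Data frame sent" on stream 3 is the file content coming back. The equivalent call with client-go, as a sketch:

package main

import (
	"bytes"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Build the exec subresource request, mirroring the logged options.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-2435").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Print(stdout.String())
}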
Aug 21 00:05:46.442: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:46.442: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:46.472926       6 log.go:172] (0xc002dee9a0) (0xc000eaf7c0) Create stream
I0821 00:05:46.473001       6 log.go:172] (0xc002dee9a0) (0xc000eaf7c0) Stream added, broadcasting: 1
I0821 00:05:46.474905       6 log.go:172] (0xc002dee9a0) Reply frame received for 1
I0821 00:05:46.474947       6 log.go:172] (0xc002dee9a0) (0xc000906140) Create stream
I0821 00:05:46.474962       6 log.go:172] (0xc002dee9a0) (0xc000906140) Stream added, broadcasting: 3
I0821 00:05:46.476041       6 log.go:172] (0xc002dee9a0) Reply frame received for 3
I0821 00:05:46.476097       6 log.go:172] (0xc002dee9a0) (0xc000906a00) Create stream
I0821 00:05:46.476114       6 log.go:172] (0xc002dee9a0) (0xc000906a00) Stream added, broadcasting: 5
I0821 00:05:46.477305       6 log.go:172] (0xc002dee9a0) Reply frame received for 5
I0821 00:05:46.551629       6 log.go:172] (0xc002dee9a0) Data frame received for 3
I0821 00:05:46.551660       6 log.go:172] (0xc000906140) (3) Data frame handling
I0821 00:05:46.551675       6 log.go:172] (0xc000906140) (3) Data frame sent
I0821 00:05:46.551683       6 log.go:172] (0xc002dee9a0) Data frame received for 3
I0821 00:05:46.551693       6 log.go:172] (0xc000906140) (3) Data frame handling
I0821 00:05:46.551847       6 log.go:172] (0xc002dee9a0) Data frame received for 5
I0821 00:05:46.551874       6 log.go:172] (0xc000906a00) (5) Data frame handling
I0821 00:05:46.553459       6 log.go:172] (0xc002dee9a0) Data frame received for 1
I0821 00:05:46.553503       6 log.go:172] (0xc000eaf7c0) (1) Data frame handling
I0821 00:05:46.553527       6 log.go:172] (0xc000eaf7c0) (1) Data frame sent
I0821 00:05:46.553553       6 log.go:172] (0xc002dee9a0) (0xc000eaf7c0) Stream removed, broadcasting: 1
I0821 00:05:46.553588       6 log.go:172] (0xc002dee9a0) Go away received
I0821 00:05:46.553657       6 log.go:172] (0xc002dee9a0) (0xc000eaf7c0) Stream removed, broadcasting: 1
I0821 00:05:46.553677       6 log.go:172] (0xc002dee9a0) (0xc000906140) Stream removed, broadcasting: 3
I0821 00:05:46.553689       6 log.go:172] (0xc002dee9a0) (0xc000906a00) Stream removed, broadcasting: 5
Aug 21 00:05:46.553: INFO: Exec stderr: ""
Aug 21 00:05:46.553: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:46.553: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:46.585712       6 log.go:172] (0xc0011024d0) (0xc001d04140) Create stream
I0821 00:05:46.585756       6 log.go:172] (0xc0011024d0) (0xc001d04140) Stream added, broadcasting: 1
I0821 00:05:46.588036       6 log.go:172] (0xc0011024d0) Reply frame received for 1
I0821 00:05:46.588083       6 log.go:172] (0xc0011024d0) (0xc000eafa40) Create stream
I0821 00:05:46.588099       6 log.go:172] (0xc0011024d0) (0xc000eafa40) Stream added, broadcasting: 3
I0821 00:05:46.589205       6 log.go:172] (0xc0011024d0) Reply frame received for 3
I0821 00:05:46.589253       6 log.go:172] (0xc0011024d0) (0xc000eafb80) Create stream
I0821 00:05:46.589268       6 log.go:172] (0xc0011024d0) (0xc000eafb80) Stream added, broadcasting: 5
I0821 00:05:46.590223       6 log.go:172] (0xc0011024d0) Reply frame received for 5
I0821 00:05:46.656156       6 log.go:172] (0xc0011024d0) Data frame received for 3
I0821 00:05:46.656189       6 log.go:172] (0xc000eafa40) (3) Data frame handling
I0821 00:05:46.656200       6 log.go:172] (0xc000eafa40) (3) Data frame sent
I0821 00:05:46.656210       6 log.go:172] (0xc0011024d0) Data frame received for 3
I0821 00:05:46.656225       6 log.go:172] (0xc000eafa40) (3) Data frame handling
I0821 00:05:46.656247       6 log.go:172] (0xc0011024d0) Data frame received for 5
I0821 00:05:46.656256       6 log.go:172] (0xc000eafb80) (5) Data frame handling
I0821 00:05:46.657994       6 log.go:172] (0xc0011024d0) Data frame received for 1
I0821 00:05:46.658031       6 log.go:172] (0xc001d04140) (1) Data frame handling
I0821 00:05:46.658056       6 log.go:172] (0xc001d04140) (1) Data frame sent
I0821 00:05:46.658082       6 log.go:172] (0xc0011024d0) (0xc001d04140) Stream removed, broadcasting: 1
I0821 00:05:46.658112       6 log.go:172] (0xc0011024d0) Go away received
I0821 00:05:46.658283       6 log.go:172] (0xc0011024d0) (0xc001d04140) Stream removed, broadcasting: 1
I0821 00:05:46.658315       6 log.go:172] (0xc0011024d0) (0xc000eafa40) Stream removed, broadcasting: 3
I0821 00:05:46.658337       6 log.go:172] (0xc0011024d0) (0xc000eafb80) Stream removed, broadcasting: 5
Aug 21 00:05:46.658: INFO: Exec stderr: ""
Aug 21 00:05:46.658: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:46.658: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:46.699078       6 log.go:172] (0xc002deefd0) (0xc0003a2e60) Create stream
I0821 00:05:46.699104       6 log.go:172] (0xc002deefd0) (0xc0003a2e60) Stream added, broadcasting: 1
I0821 00:05:46.701112       6 log.go:172] (0xc002deefd0) Reply frame received for 1
I0821 00:05:46.701169       6 log.go:172] (0xc002deefd0) (0xc001d041e0) Create stream
I0821 00:05:46.701195       6 log.go:172] (0xc002deefd0) (0xc001d041e0) Stream added, broadcasting: 3
I0821 00:05:46.702353       6 log.go:172] (0xc002deefd0) Reply frame received for 3
I0821 00:05:46.702385       6 log.go:172] (0xc002deefd0) (0xc000906c80) Create stream
I0821 00:05:46.702404       6 log.go:172] (0xc002deefd0) (0xc000906c80) Stream added, broadcasting: 5
I0821 00:05:46.703522       6 log.go:172] (0xc002deefd0) Reply frame received for 5
I0821 00:05:46.784683       6 log.go:172] (0xc002deefd0) Data frame received for 3
I0821 00:05:46.784711       6 log.go:172] (0xc001d041e0) (3) Data frame handling
I0821 00:05:46.784786       6 log.go:172] (0xc001d041e0) (3) Data frame sent
I0821 00:05:46.784806       6 log.go:172] (0xc002deefd0) Data frame received for 3
I0821 00:05:46.784816       6 log.go:172] (0xc001d041e0) (3) Data frame handling
I0821 00:05:46.784879       6 log.go:172] (0xc002deefd0) Data frame received for 5
I0821 00:05:46.784915       6 log.go:172] (0xc000906c80) (5) Data frame handling
I0821 00:05:46.786745       6 log.go:172] (0xc002deefd0) Data frame received for 1
I0821 00:05:46.786761       6 log.go:172] (0xc0003a2e60) (1) Data frame handling
I0821 00:05:46.786772       6 log.go:172] (0xc0003a2e60) (1) Data frame sent
I0821 00:05:46.786846       6 log.go:172] (0xc002deefd0) (0xc0003a2e60) Stream removed, broadcasting: 1
I0821 00:05:46.786923       6 log.go:172] (0xc002deefd0) (0xc0003a2e60) Stream removed, broadcasting: 1
I0821 00:05:46.786939       6 log.go:172] (0xc002deefd0) (0xc001d041e0) Stream removed, broadcasting: 3
I0821 00:05:46.787107       6 log.go:172] (0xc002deefd0) Go away received
I0821 00:05:46.787207       6 log.go:172] (0xc002deefd0) (0xc000906c80) Stream removed, broadcasting: 5
Aug 21 00:05:46.787: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 21 00:05:46.787: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:46.787: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:46.814055       6 log.go:172] (0xc002f63a20) (0xc000907a40) Create stream
I0821 00:05:46.814076       6 log.go:172] (0xc002f63a20) (0xc000907a40) Stream added, broadcasting: 1
I0821 00:05:46.815906       6 log.go:172] (0xc002f63a20) Reply frame received for 1
I0821 00:05:46.815958       6 log.go:172] (0xc002f63a20) (0xc001d04280) Create stream
I0821 00:05:46.815980       6 log.go:172] (0xc002f63a20) (0xc001d04280) Stream added, broadcasting: 3
I0821 00:05:46.817055       6 log.go:172] (0xc002f63a20) Reply frame received for 3
I0821 00:05:46.817134       6 log.go:172] (0xc002f63a20) (0xc001d04320) Create stream
I0821 00:05:46.817167       6 log.go:172] (0xc002f63a20) (0xc001d04320) Stream added, broadcasting: 5
I0821 00:05:46.818190       6 log.go:172] (0xc002f63a20) Reply frame received for 5
I0821 00:05:46.875878       6 log.go:172] (0xc002f63a20) Data frame received for 5
I0821 00:05:46.875930       6 log.go:172] (0xc001d04320) (5) Data frame handling
I0821 00:05:46.875959       6 log.go:172] (0xc002f63a20) Data frame received for 3
I0821 00:05:46.875972       6 log.go:172] (0xc001d04280) (3) Data frame handling
I0821 00:05:46.875991       6 log.go:172] (0xc001d04280) (3) Data frame sent
I0821 00:05:46.876007       6 log.go:172] (0xc002f63a20) Data frame received for 3
I0821 00:05:46.876033       6 log.go:172] (0xc001d04280) (3) Data frame handling
I0821 00:05:46.877299       6 log.go:172] (0xc002f63a20) Data frame received for 1
I0821 00:05:46.877312       6 log.go:172] (0xc000907a40) (1) Data frame handling
I0821 00:05:46.877321       6 log.go:172] (0xc000907a40) (1) Data frame sent
I0821 00:05:46.877332       6 log.go:172] (0xc002f63a20) (0xc000907a40) Stream removed, broadcasting: 1
I0821 00:05:46.877340       6 log.go:172] (0xc002f63a20) Go away received
I0821 00:05:46.877474       6 log.go:172] (0xc002f63a20) (0xc000907a40) Stream removed, broadcasting: 1
I0821 00:05:46.877517       6 log.go:172] (0xc002f63a20) (0xc001d04280) Stream removed, broadcasting: 3
I0821 00:05:46.877548       6 log.go:172] (0xc002f63a20) (0xc001d04320) Stream removed, broadcasting: 5
Aug 21 00:05:46.877: INFO: Exec stderr: ""
Aug 21 00:05:46.877: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:46.877: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:46.930537       6 log.go:172] (0xc0016b4160) (0xc000b78280) Create stream
I0821 00:05:46.930578       6 log.go:172] (0xc0016b4160) (0xc000b78280) Stream added, broadcasting: 1
I0821 00:05:46.932815       6 log.go:172] (0xc0016b4160) Reply frame received for 1
I0821 00:05:46.932919       6 log.go:172] (0xc0016b4160) (0xc002dbd4a0) Create stream
I0821 00:05:46.932941       6 log.go:172] (0xc0016b4160) (0xc002dbd4a0) Stream added, broadcasting: 3
I0821 00:05:46.933976       6 log.go:172] (0xc0016b4160) Reply frame received for 3
I0821 00:05:46.934013       6 log.go:172] (0xc0016b4160) (0xc000e22460) Create stream
I0821 00:05:46.934028       6 log.go:172] (0xc0016b4160) (0xc000e22460) Stream added, broadcasting: 5
I0821 00:05:46.934815       6 log.go:172] (0xc0016b4160) Reply frame received for 5
I0821 00:05:47.002905       6 log.go:172] (0xc0016b4160) Data frame received for 5
I0821 00:05:47.002932       6 log.go:172] (0xc000e22460) (5) Data frame handling
I0821 00:05:47.002947       6 log.go:172] (0xc0016b4160) Data frame received for 3
I0821 00:05:47.002952       6 log.go:172] (0xc002dbd4a0) (3) Data frame handling
I0821 00:05:47.002963       6 log.go:172] (0xc002dbd4a0) (3) Data frame sent
I0821 00:05:47.002970       6 log.go:172] (0xc0016b4160) Data frame received for 3
I0821 00:05:47.002974       6 log.go:172] (0xc002dbd4a0) (3) Data frame handling
I0821 00:05:47.004223       6 log.go:172] (0xc0016b4160) Data frame received for 1
I0821 00:05:47.004262       6 log.go:172] (0xc000b78280) (1) Data frame handling
I0821 00:05:47.004282       6 log.go:172] (0xc000b78280) (1) Data frame sent
I0821 00:05:47.004300       6 log.go:172] (0xc0016b4160) (0xc000b78280) Stream removed, broadcasting: 1
I0821 00:05:47.004325       6 log.go:172] (0xc0016b4160) Go away received
I0821 00:05:47.004414       6 log.go:172] (0xc0016b4160) (0xc000b78280) Stream removed, broadcasting: 1
I0821 00:05:47.004428       6 log.go:172] (0xc0016b4160) (0xc002dbd4a0) Stream removed, broadcasting: 3
I0821 00:05:47.004436       6 log.go:172] (0xc0016b4160) (0xc000e22460) Stream removed, broadcasting: 5
Aug 21 00:05:47.004: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 21 00:05:47.004: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:47.004: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:47.026988       6 log.go:172] (0xc0016b49a0) (0xc000b788c0) Create stream
I0821 00:05:47.027010       6 log.go:172] (0xc0016b49a0) (0xc000b788c0) Stream added, broadcasting: 1
I0821 00:05:47.028671       6 log.go:172] (0xc0016b49a0) Reply frame received for 1
I0821 00:05:47.028702       6 log.go:172] (0xc0016b49a0) (0xc0003a3040) Create stream
I0821 00:05:47.028714       6 log.go:172] (0xc0016b49a0) (0xc0003a3040) Stream added, broadcasting: 3
I0821 00:05:47.029660       6 log.go:172] (0xc0016b49a0) Reply frame received for 3
I0821 00:05:47.029712       6 log.go:172] (0xc0016b49a0) (0xc0013a4000) Create stream
I0821 00:05:47.029732       6 log.go:172] (0xc0016b49a0) (0xc0013a4000) Stream added, broadcasting: 5
I0821 00:05:47.030600       6 log.go:172] (0xc0016b49a0) Reply frame received for 5
I0821 00:05:47.095588       6 log.go:172] (0xc0016b49a0) Data frame received for 5
I0821 00:05:47.095627       6 log.go:172] (0xc0013a4000) (5) Data frame handling
I0821 00:05:47.095652       6 log.go:172] (0xc0016b49a0) Data frame received for 3
I0821 00:05:47.095663       6 log.go:172] (0xc0003a3040) (3) Data frame handling
I0821 00:05:47.095677       6 log.go:172] (0xc0003a3040) (3) Data frame sent
I0821 00:05:47.095688       6 log.go:172] (0xc0016b49a0) Data frame received for 3
I0821 00:05:47.095699       6 log.go:172] (0xc0003a3040) (3) Data frame handling
I0821 00:05:47.097285       6 log.go:172] (0xc0016b49a0) Data frame received for 1
I0821 00:05:47.097304       6 log.go:172] (0xc000b788c0) (1) Data frame handling
I0821 00:05:47.097315       6 log.go:172] (0xc000b788c0) (1) Data frame sent
I0821 00:05:47.097338       6 log.go:172] (0xc0016b49a0) (0xc000b788c0) Stream removed, broadcasting: 1
I0821 00:05:47.097420       6 log.go:172] (0xc0016b49a0) Go away received
I0821 00:05:47.097477       6 log.go:172] (0xc0016b49a0) (0xc000b788c0) Stream removed, broadcasting: 1
I0821 00:05:47.097506       6 log.go:172] (0xc0016b49a0) (0xc0003a3040) Stream removed, broadcasting: 3
I0821 00:05:47.097522       6 log.go:172] (0xc0016b49a0) (0xc0013a4000) Stream removed, broadcasting: 5
Aug 21 00:05:47.097: INFO: Exec stderr: ""
Aug 21 00:05:47.097: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:47.097: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:47.130111       6 log.go:172] (0xc002f1e6e0) (0xc002dbd9a0) Create stream
I0821 00:05:47.130142       6 log.go:172] (0xc002f1e6e0) (0xc002dbd9a0) Stream added, broadcasting: 1
I0821 00:05:47.131913       6 log.go:172] (0xc002f1e6e0) Reply frame received for 1
I0821 00:05:47.131960       6 log.go:172] (0xc002f1e6e0) (0xc0013a40a0) Create stream
I0821 00:05:47.131978       6 log.go:172] (0xc002f1e6e0) (0xc0013a40a0) Stream added, broadcasting: 3
I0821 00:05:47.132982       6 log.go:172] (0xc002f1e6e0) Reply frame received for 3
I0821 00:05:47.133025       6 log.go:172] (0xc002f1e6e0) (0xc0013a41e0) Create stream
I0821 00:05:47.133044       6 log.go:172] (0xc002f1e6e0) (0xc0013a41e0) Stream added, broadcasting: 5
I0821 00:05:47.133956       6 log.go:172] (0xc002f1e6e0) Reply frame received for 5
I0821 00:05:47.195855       6 log.go:172] (0xc002f1e6e0) Data frame received for 5
I0821 00:05:47.195889       6 log.go:172] (0xc0013a41e0) (5) Data frame handling
I0821 00:05:47.195910       6 log.go:172] (0xc002f1e6e0) Data frame received for 3
I0821 00:05:47.195928       6 log.go:172] (0xc0013a40a0) (3) Data frame handling
I0821 00:05:47.195937       6 log.go:172] (0xc0013a40a0) (3) Data frame sent
I0821 00:05:47.195944       6 log.go:172] (0xc002f1e6e0) Data frame received for 3
I0821 00:05:47.195963       6 log.go:172] (0xc0013a40a0) (3) Data frame handling
I0821 00:05:47.197731       6 log.go:172] (0xc002f1e6e0) Data frame received for 1
I0821 00:05:47.197763       6 log.go:172] (0xc002dbd9a0) (1) Data frame handling
I0821 00:05:47.197785       6 log.go:172] (0xc002dbd9a0) (1) Data frame sent
I0821 00:05:47.197808       6 log.go:172] (0xc002f1e6e0) (0xc002dbd9a0) Stream removed, broadcasting: 1
I0821 00:05:47.197910       6 log.go:172] (0xc002f1e6e0) Go away received
I0821 00:05:47.197958       6 log.go:172] (0xc002f1e6e0) (0xc002dbd9a0) Stream removed, broadcasting: 1
I0821 00:05:47.197980       6 log.go:172] (0xc002f1e6e0) (0xc0013a40a0) Stream removed, broadcasting: 3
I0821 00:05:47.197988       6 log.go:172] (0xc002f1e6e0) (0xc0013a41e0) Stream removed, broadcasting: 5
Aug 21 00:05:47.197: INFO: Exec stderr: ""
Aug 21 00:05:47.198: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:47.198: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:47.225888       6 log.go:172] (0xc0016b5080) (0xc000b78f00) Create stream
I0821 00:05:47.225911       6 log.go:172] (0xc0016b5080) (0xc000b78f00) Stream added, broadcasting: 1
I0821 00:05:47.228216       6 log.go:172] (0xc0016b5080) Reply frame received for 1
I0821 00:05:47.228282       6 log.go:172] (0xc0016b5080) (0xc000b79040) Create stream
I0821 00:05:47.228298       6 log.go:172] (0xc0016b5080) (0xc000b79040) Stream added, broadcasting: 3
I0821 00:05:47.229417       6 log.go:172] (0xc0016b5080) Reply frame received for 3
I0821 00:05:47.229450       6 log.go:172] (0xc0016b5080) (0xc002dbdb80) Create stream
I0821 00:05:47.229459       6 log.go:172] (0xc0016b5080) (0xc002dbdb80) Stream added, broadcasting: 5
I0821 00:05:47.230563       6 log.go:172] (0xc0016b5080) Reply frame received for 5
I0821 00:05:47.285807       6 log.go:172] (0xc0016b5080) Data frame received for 5
I0821 00:05:47.285833       6 log.go:172] (0xc002dbdb80) (5) Data frame handling
I0821 00:05:47.286109       6 log.go:172] (0xc0016b5080) Data frame received for 3
I0821 00:05:47.286127       6 log.go:172] (0xc000b79040) (3) Data frame handling
I0821 00:05:47.286143       6 log.go:172] (0xc000b79040) (3) Data frame sent
I0821 00:05:47.286157       6 log.go:172] (0xc0016b5080) Data frame received for 3
I0821 00:05:47.286164       6 log.go:172] (0xc000b79040) (3) Data frame handling
I0821 00:05:47.287911       6 log.go:172] (0xc0016b5080) Data frame received for 1
I0821 00:05:47.287932       6 log.go:172] (0xc000b78f00) (1) Data frame handling
I0821 00:05:47.287945       6 log.go:172] (0xc000b78f00) (1) Data frame sent
I0821 00:05:47.287961       6 log.go:172] (0xc0016b5080) (0xc000b78f00) Stream removed, broadcasting: 1
I0821 00:05:47.287986       6 log.go:172] (0xc0016b5080) Go away received
I0821 00:05:47.288060       6 log.go:172] (0xc0016b5080) (0xc000b78f00) Stream removed, broadcasting: 1
I0821 00:05:47.288078       6 log.go:172] (0xc0016b5080) (0xc000b79040) Stream removed, broadcasting: 3
I0821 00:05:47.288092       6 log.go:172] (0xc0016b5080) (0xc002dbdb80) Stream removed, broadcasting: 5
Aug 21 00:05:47.288: INFO: Exec stderr: ""
Aug 21 00:05:47.288: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2435 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:05:47.288: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:05:47.313435       6 log.go:172] (0xc002f1ee70) (0xc002dbdd60) Create stream
I0821 00:05:47.313458       6 log.go:172] (0xc002f1ee70) (0xc002dbdd60) Stream added, broadcasting: 1
I0821 00:05:47.314865       6 log.go:172] (0xc002f1ee70) Reply frame received for 1
I0821 00:05:47.314894       6 log.go:172] (0xc002f1ee70) (0xc000b792c0) Create stream
I0821 00:05:47.314905       6 log.go:172] (0xc002f1ee70) (0xc000b792c0) Stream added, broadcasting: 3
I0821 00:05:47.315654       6 log.go:172] (0xc002f1ee70) Reply frame received for 3
I0821 00:05:47.315681       6 log.go:172] (0xc002f1ee70) (0xc0013a43c0) Create stream
I0821 00:05:47.315688       6 log.go:172] (0xc002f1ee70) (0xc0013a43c0) Stream added, broadcasting: 5
I0821 00:05:47.316448       6 log.go:172] (0xc002f1ee70) Reply frame received for 5
I0821 00:05:47.372685       6 log.go:172] (0xc002f1ee70) Data frame received for 5
I0821 00:05:47.372857       6 log.go:172] (0xc0013a43c0) (5) Data frame handling
I0821 00:05:47.372907       6 log.go:172] (0xc002f1ee70) Data frame received for 3
I0821 00:05:47.372941       6 log.go:172] (0xc000b792c0) (3) Data frame handling
I0821 00:05:47.372961       6 log.go:172] (0xc000b792c0) (3) Data frame sent
I0821 00:05:47.372971       6 log.go:172] (0xc002f1ee70) Data frame received for 3
I0821 00:05:47.372980       6 log.go:172] (0xc000b792c0) (3) Data frame handling
I0821 00:05:47.374271       6 log.go:172] (0xc002f1ee70) Data frame received for 1
I0821 00:05:47.374288       6 log.go:172] (0xc002dbdd60) (1) Data frame handling
I0821 00:05:47.374299       6 log.go:172] (0xc002dbdd60) (1) Data frame sent
I0821 00:05:47.374404       6 log.go:172] (0xc002f1ee70) (0xc002dbdd60) Stream removed, broadcasting: 1
I0821 00:05:47.374458       6 log.go:172] (0xc002f1ee70) Go away received
I0821 00:05:47.374583       6 log.go:172] (0xc002f1ee70) (0xc002dbdd60) Stream removed, broadcasting: 1
I0821 00:05:47.374617       6 log.go:172] (0xc002f1ee70) (0xc000b792c0) Stream removed, broadcasting: 3
I0821 00:05:47.374638       6 log.go:172] (0xc002f1ee70) (0xc0013a43c0) Stream removed, broadcasting: 5
Aug 21 00:05:47.374: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:05:47.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2435" for this suite.

• [SLOW TEST:13.248 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4379,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:05:47.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 21 00:05:55.561: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 00:05:55.564: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 00:05:57.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 00:05:57.569: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 00:05:59.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 00:05:59.569: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 00:06:01.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 00:06:01.568: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 00:06:03.564: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 00:06:03.568: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:03.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6952" for this suite.

• [SLOW TEST:16.193 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4384,"failed":0}
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:03.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-96a7355c-3398-41bd-a9d2-fc7e8f2a6cd5
STEP: Creating a pod to test consume configMaps
Aug 21 00:06:04.236: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab" in namespace "projected-4454" to be "success or failure"
Aug 21 00:06:04.389: INFO: Pod "pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab": Phase="Pending", Reason="", readiness=false. Elapsed: 153.427655ms
Aug 21 00:06:06.394: INFO: Pod "pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157720296s
Aug 21 00:06:08.397: INFO: Pod "pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161269351s
STEP: Saw pod success
Aug 21 00:06:08.397: INFO: Pod "pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab" satisfied condition "success or failure"
Aug 21 00:06:08.399: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 00:06:08.615: INFO: Waiting for pod pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab to disappear
Aug 21 00:06:08.625: INFO: Pod pod-projected-configmaps-1d0b80fe-cbc0-4c1f-a340-af75934b36ab no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:08.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4454" for this suite.

• [SLOW TEST:5.112 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4384,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:08.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc
Aug 21 00:06:08.898: INFO: Pod name my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc: Found 0 pods out of 1
Aug 21 00:06:13.908: INFO: Pod name my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc: Found 1 pods out of 1
Aug 21 00:06:13.908: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc" are running
Aug 21 00:06:13.914: INFO: Pod "my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc-9s4cw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 00:06:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 00:06:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 00:06:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 00:06:08 +0000 UTC Reason: Message:}])
Aug 21 00:06:13.914: INFO: Trying to dial the pod
Aug 21 00:06:18.926: INFO: Controller my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc: Got expected result from replica 1 [my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc-9s4cw]: "my-hostname-basic-afa3beda-5b40-4bb8-9ab5-a278682fddbc-9s4cw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4708" for this suite.

• [SLOW TEST:10.248 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":266,"skipped":4385,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:18.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:06:19.059: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-16ec36e4-ba46-409b-9dba-a6e3add2a64f" in namespace "security-context-test-9615" to be "success or failure"
Aug 21 00:06:19.081: INFO: Pod "busybox-privileged-false-16ec36e4-ba46-409b-9dba-a6e3add2a64f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.157031ms
Aug 21 00:06:21.085: INFO: Pod "busybox-privileged-false-16ec36e4-ba46-409b-9dba-a6e3add2a64f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025297025s
Aug 21 00:06:23.089: INFO: Pod "busybox-privileged-false-16ec36e4-ba46-409b-9dba-a6e3add2a64f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029167212s
Aug 21 00:06:23.089: INFO: Pod "busybox-privileged-false-16ec36e4-ba46-409b-9dba-a6e3add2a64f" satisfied condition "success or failure"
Aug 21 00:06:23.118: INFO: Got logs for pod "busybox-privileged-false-16ec36e4-ba46-409b-9dba-a6e3add2a64f": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:23.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9615" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4405,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:23.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:06:27.548: INFO: Waiting up to 5m0s for pod "client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302" in namespace "pods-3555" to be "success or failure"
Aug 21 00:06:27.569: INFO: Pod "client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302": Phase="Pending", Reason="", readiness=false. Elapsed: 20.976139ms
Aug 21 00:06:29.573: INFO: Pod "client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024877886s
Aug 21 00:06:31.584: INFO: Pod "client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035285889s
STEP: Saw pod success
Aug 21 00:06:31.584: INFO: Pod "client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302" satisfied condition "success or failure"
Aug 21 00:06:31.586: INFO: Trying to get logs from node jerma-worker pod client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302 container env3cont: 
STEP: delete the pod
Aug 21 00:06:31.608: INFO: Waiting for pod client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302 to disappear
Aug 21 00:06:31.620: INFO: Pod client-envvars-655bf4d1-cc85-475f-9947-81ab2e0bb302 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:31.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3555" for this suite.

• [SLOW TEST:8.505 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4418,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:31.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-2001
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2001
STEP: creating replication controller externalsvc in namespace services-2001
I0821 00:06:31.857458       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2001, replica count: 2
I0821 00:06:34.907967       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 00:06:37.910773       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 21 00:06:37.987: INFO: Creating new exec pod
Aug 21 00:06:42.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2001 execpod52htj -- /bin/sh -x -c nslookup nodeport-service'
Aug 21 00:06:42.195: INFO: stderr: "I0821 00:06:42.119836    4494 log.go:172] (0xc0003c09a0) (0xc0007d6140) Create stream\nI0821 00:06:42.119914    4494 log.go:172] (0xc0003c09a0) (0xc0007d6140) Stream added, broadcasting: 1\nI0821 00:06:42.122704    4494 log.go:172] (0xc0003c09a0) Reply frame received for 1\nI0821 00:06:42.122754    4494 log.go:172] (0xc0003c09a0) (0xc0005ce820) Create stream\nI0821 00:06:42.122781    4494 log.go:172] (0xc0003c09a0) (0xc0005ce820) Stream added, broadcasting: 3\nI0821 00:06:42.123960    4494 log.go:172] (0xc0003c09a0) Reply frame received for 3\nI0821 00:06:42.123979    4494 log.go:172] (0xc0003c09a0) (0xc0007d61e0) Create stream\nI0821 00:06:42.123985    4494 log.go:172] (0xc0003c09a0) (0xc0007d61e0) Stream added, broadcasting: 5\nI0821 00:06:42.125155    4494 log.go:172] (0xc0003c09a0) Reply frame received for 5\nI0821 00:06:42.177091    4494 log.go:172] (0xc0003c09a0) Data frame received for 5\nI0821 00:06:42.177130    4494 log.go:172] (0xc0007d61e0) (5) Data frame handling\nI0821 00:06:42.177158    4494 log.go:172] (0xc0007d61e0) (5) Data frame sent\n+ nslookup nodeport-service\nI0821 00:06:42.185114    4494 log.go:172] (0xc0003c09a0) Data frame received for 3\nI0821 00:06:42.185144    4494 log.go:172] (0xc0005ce820) (3) Data frame handling\nI0821 00:06:42.185161    4494 log.go:172] (0xc0005ce820) (3) Data frame sent\nI0821 00:06:42.186072    4494 log.go:172] (0xc0003c09a0) Data frame received for 3\nI0821 00:06:42.186093    4494 log.go:172] (0xc0005ce820) (3) Data frame handling\nI0821 00:06:42.186116    4494 log.go:172] (0xc0005ce820) (3) Data frame sent\nI0821 00:06:42.186528    4494 log.go:172] (0xc0003c09a0) Data frame received for 5\nI0821 00:06:42.186555    4494 log.go:172] (0xc0007d61e0) (5) Data frame handling\nI0821 00:06:42.186582    4494 log.go:172] (0xc0003c09a0) Data frame received for 3\nI0821 00:06:42.186595    4494 log.go:172] (0xc0005ce820) (3) Data frame handling\nI0821 00:06:42.188571    4494 log.go:172] (0xc0003c09a0) Data frame received for 1\nI0821 00:06:42.188593    4494 log.go:172] (0xc0007d6140) (1) Data frame handling\nI0821 00:06:42.188616    4494 log.go:172] (0xc0007d6140) (1) Data frame sent\nI0821 00:06:42.188626    4494 log.go:172] (0xc0003c09a0) (0xc0007d6140) Stream removed, broadcasting: 1\nI0821 00:06:42.188906    4494 log.go:172] (0xc0003c09a0) Go away received\nI0821 00:06:42.189007    4494 log.go:172] (0xc0003c09a0) (0xc0007d6140) Stream removed, broadcasting: 1\nI0821 00:06:42.189023    4494 log.go:172] (0xc0003c09a0) (0xc0005ce820) Stream removed, broadcasting: 3\nI0821 00:06:42.189029    4494 log.go:172] (0xc0003c09a0) (0xc0007d61e0) Stream removed, broadcasting: 5\n"
Aug 21 00:06:42.195: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2001.svc.cluster.local\tcanonical name = externalsvc.services-2001.svc.cluster.local.\nName:\texternalsvc.services-2001.svc.cluster.local\nAddress: 10.111.38.167\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2001, will wait for the garbage collector to delete the pods
Aug 21 00:06:42.259: INFO: Deleting ReplicationController externalsvc took: 10.491051ms
Aug 21 00:06:42.659: INFO: Terminating ReplicationController externalsvc pods took: 400.316983ms
Aug 21 00:06:51.843: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:51.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2001" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:20.244 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":269,"skipped":4467,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:51.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 00:06:51.951: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:52.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":270,"skipped":4482,"failed":0}
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:52.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-3688/configmap-test-cf79d4c0-65bb-4318-8f43-a175de66ed4b
STEP: Creating a pod to test consume configMaps
Aug 21 00:06:53.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d" in namespace "configmap-3688" to be "success or failure"
Aug 21 00:06:53.088: INFO: Pod "pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.926072ms
Aug 21 00:06:55.187: INFO: Pod "pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106585653s
Aug 21 00:06:57.189: INFO: Pod "pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109180038s
STEP: Saw pod success
Aug 21 00:06:57.189: INFO: Pod "pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d" satisfied condition "success or failure"
Aug 21 00:06:57.192: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d container env-test: 
STEP: delete the pod
Aug 21 00:06:57.402: INFO: Waiting for pod pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d to disappear
Aug 21 00:06:57.419: INFO: Pod pod-configmaps-764897d5-6a3c-455c-abc9-49217b7c657d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:06:57.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3688" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4488,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:06:57.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-830
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 00:06:57.571: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 21 00:07:21.925: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=udp&host=10.244.2.150&port=8081&tries=1'] Namespace:pod-network-test-830 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:07:21.925: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:07:21.967267       6 log.go:172] (0xc001103600) (0xc001dedd60) Create stream
I0821 00:07:21.967327       6 log.go:172] (0xc001103600) (0xc001dedd60) Stream added, broadcasting: 1
I0821 00:07:21.969633       6 log.go:172] (0xc001103600) Reply frame received for 1
I0821 00:07:21.969703       6 log.go:172] (0xc001103600) (0xc001dede00) Create stream
I0821 00:07:21.969737       6 log.go:172] (0xc001103600) (0xc001dede00) Stream added, broadcasting: 3
I0821 00:07:21.970800       6 log.go:172] (0xc001103600) Reply frame received for 3
I0821 00:07:21.970991       6 log.go:172] (0xc001103600) (0xc0025b30e0) Create stream
I0821 00:07:21.971034       6 log.go:172] (0xc001103600) (0xc0025b30e0) Stream added, broadcasting: 5
I0821 00:07:21.972275       6 log.go:172] (0xc001103600) Reply frame received for 5
I0821 00:07:22.079469       6 log.go:172] (0xc001103600) Data frame received for 3
I0821 00:07:22.079515       6 log.go:172] (0xc001dede00) (3) Data frame handling
I0821 00:07:22.079534       6 log.go:172] (0xc001dede00) (3) Data frame sent
I0821 00:07:22.080513       6 log.go:172] (0xc001103600) Data frame received for 5
I0821 00:07:22.080557       6 log.go:172] (0xc0025b30e0) (5) Data frame handling
I0821 00:07:22.080706       6 log.go:172] (0xc001103600) Data frame received for 3
I0821 00:07:22.080716       6 log.go:172] (0xc001dede00) (3) Data frame handling
I0821 00:07:22.083415       6 log.go:172] (0xc001103600) Data frame received for 1
I0821 00:07:22.083454       6 log.go:172] (0xc001dedd60) (1) Data frame handling
I0821 00:07:22.083480       6 log.go:172] (0xc001dedd60) (1) Data frame sent
I0821 00:07:22.083515       6 log.go:172] (0xc001103600) (0xc001dedd60) Stream removed, broadcasting: 1
I0821 00:07:22.083559       6 log.go:172] (0xc001103600) Go away received
I0821 00:07:22.083642       6 log.go:172] (0xc001103600) (0xc001dedd60) Stream removed, broadcasting: 1
I0821 00:07:22.083675       6 log.go:172] (0xc001103600) (0xc001dede00) Stream removed, broadcasting: 3
I0821 00:07:22.083701       6 log.go:172] (0xc001103600) (0xc0025b30e0) Stream removed, broadcasting: 5
Aug 21 00:07:22.083: INFO: Waiting for responses: map[]
Aug 21 00:07:22.102: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=udp&host=10.244.1.157&port=8081&tries=1'] Namespace:pod-network-test-830 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 00:07:22.102: INFO: >>> kubeConfig: /root/.kube/config
I0821 00:07:22.134388       6 log.go:172] (0xc002e1a160) (0xc00182bc20) Create stream
I0821 00:07:22.134419       6 log.go:172] (0xc002e1a160) (0xc00182bc20) Stream added, broadcasting: 1
I0821 00:07:22.136386       6 log.go:172] (0xc002e1a160) Reply frame received for 1
I0821 00:07:22.136430       6 log.go:172] (0xc002e1a160) (0xc00253a280) Create stream
I0821 00:07:22.136446       6 log.go:172] (0xc002e1a160) (0xc00253a280) Stream added, broadcasting: 3
I0821 00:07:22.137784       6 log.go:172] (0xc002e1a160) Reply frame received for 3
I0821 00:07:22.137839       6 log.go:172] (0xc002e1a160) (0xc00253a320) Create stream
I0821 00:07:22.137855       6 log.go:172] (0xc002e1a160) (0xc00253a320) Stream added, broadcasting: 5
I0821 00:07:22.139047       6 log.go:172] (0xc002e1a160) Reply frame received for 5
I0821 00:07:22.226402       6 log.go:172] (0xc002e1a160) Data frame received for 3
I0821 00:07:22.226427       6 log.go:172] (0xc00253a280) (3) Data frame handling
I0821 00:07:22.226443       6 log.go:172] (0xc00253a280) (3) Data frame sent
I0821 00:07:22.226871       6 log.go:172] (0xc002e1a160) Data frame received for 3
I0821 00:07:22.226901       6 log.go:172] (0xc00253a280) (3) Data frame handling
I0821 00:07:22.226933       6 log.go:172] (0xc002e1a160) Data frame received for 5
I0821 00:07:22.226961       6 log.go:172] (0xc00253a320) (5) Data frame handling
I0821 00:07:22.228570       6 log.go:172] (0xc002e1a160) Data frame received for 1
I0821 00:07:22.228596       6 log.go:172] (0xc00182bc20) (1) Data frame handling
I0821 00:07:22.228630       6 log.go:172] (0xc00182bc20) (1) Data frame sent
I0821 00:07:22.228652       6 log.go:172] (0xc002e1a160) (0xc00182bc20) Stream removed, broadcasting: 1
I0821 00:07:22.228678       6 log.go:172] (0xc002e1a160) Go away received
I0821 00:07:22.228814       6 log.go:172] (0xc002e1a160) (0xc00182bc20) Stream removed, broadcasting: 1
I0821 00:07:22.228836       6 log.go:172] (0xc002e1a160) (0xc00253a280) Stream removed, broadcasting: 3
I0821 00:07:22.228849       6 log.go:172] (0xc002e1a160) (0xc00253a320) Stream removed, broadcasting: 5
Aug 21 00:07:22.228: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:07:22.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-830" for this suite.

• [SLOW TEST:24.760 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4516,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:07:22.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 00:07:22.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5177'
Aug 21 00:07:22.377: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 21 00:07:22.377: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Aug 21 00:07:22.427: INFO: scanned /root for discovery docs: 
Aug 21 00:07:22.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5177'
Aug 21 00:07:38.261: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 21 00:07:38.261: INFO: stdout: "Created e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16\nScaling up e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 21 00:07:38.261: INFO: stdout: "Created e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16\nScaling up e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 21 00:07:38.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5177'
Aug 21 00:07:38.387: INFO: stderr: ""
Aug 21 00:07:38.387: INFO: stdout: "e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16-pbngg e2e-test-httpd-rc-znxb4 "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Aug 21 00:07:43.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5177'
Aug 21 00:07:43.485: INFO: stderr: ""
Aug 21 00:07:43.485: INFO: stdout: "e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16-pbngg "
Aug 21 00:07:43.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16-pbngg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5177'
Aug 21 00:07:43.574: INFO: stderr: ""
Aug 21 00:07:43.574: INFO: stdout: "true"
Aug 21 00:07:43.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16-pbngg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5177'
Aug 21 00:07:43.667: INFO: stderr: ""
Aug 21 00:07:43.667: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 21 00:07:43.667: INFO: e2e-test-httpd-rc-cd10dabb655c8687d139ef2c81371e16-pbngg is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 21 00:07:43.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5177'
Aug 21 00:07:43.770: INFO: stderr: ""
Aug 21 00:07:43.770: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:07:43.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5177" for this suite.

• [SLOW TEST:21.550 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":273,"skipped":4521,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:07:43.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 21 00:07:43.872: INFO: Waiting up to 5m0s for pod "pod-47b80f92-3432-44c4-b83f-07be1f6a4922" in namespace "emptydir-3805" to be "success or failure"
Aug 21 00:07:43.889: INFO: Pod "pod-47b80f92-3432-44c4-b83f-07be1f6a4922": Phase="Pending", Reason="", readiness=false. Elapsed: 16.477197ms
Aug 21 00:07:45.892: INFO: Pod "pod-47b80f92-3432-44c4-b83f-07be1f6a4922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019723549s
Aug 21 00:07:47.896: INFO: Pod "pod-47b80f92-3432-44c4-b83f-07be1f6a4922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023721735s
STEP: Saw pod success
Aug 21 00:07:47.896: INFO: Pod "pod-47b80f92-3432-44c4-b83f-07be1f6a4922" satisfied condition "success or failure"
Aug 21 00:07:47.898: INFO: Trying to get logs from node jerma-worker2 pod pod-47b80f92-3432-44c4-b83f-07be1f6a4922 container test-container: 
STEP: delete the pod
Aug 21 00:07:47.963: INFO: Waiting for pod pod-47b80f92-3432-44c4-b83f-07be1f6a4922 to disappear
Aug 21 00:07:47.965: INFO: Pod pod-47b80f92-3432-44c4-b83f-07be1f6a4922 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:07:47.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3805" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4529,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:07:48.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:07:48.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a" in namespace "downward-api-5773" to be "success or failure"
Aug 21 00:07:48.071: INFO: Pod "downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.269854ms
Aug 21 00:07:50.075: INFO: Pod "downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023564637s
Aug 21 00:07:52.079: INFO: Pod "downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02768787s
Aug 21 00:07:55.235: INFO: Pod "downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.183961128s
STEP: Saw pod success
Aug 21 00:07:55.235: INFO: Pod "downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a" satisfied condition "success or failure"
Aug 21 00:07:55.239: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a container client-container: 
STEP: delete the pod
Aug 21 00:07:55.681: INFO: Waiting for pod downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a to disappear
Aug 21 00:07:55.695: INFO: Pod downwardapi-volume-7cd1dc7a-bdd3-46b6-8e0b-3b94e27d385a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:07:55.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5773" for this suite.

• [SLOW TEST:7.700 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4536,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:07:55.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:07:55.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02" in namespace "downward-api-5958" to be "success or failure"
Aug 21 00:07:55.911: INFO: Pod "downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02": Phase="Pending", Reason="", readiness=false. Elapsed: 75.733861ms
Aug 21 00:07:58.080: INFO: Pod "downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244648961s
Aug 21 00:08:00.085: INFO: Pod "downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.249675994s
STEP: Saw pod success
Aug 21 00:08:00.085: INFO: Pod "downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02" satisfied condition "success or failure"
Aug 21 00:08:00.088: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02 container client-container: 
STEP: delete the pod
Aug 21 00:08:00.128: INFO: Waiting for pod downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02 to disappear
Aug 21 00:08:00.162: INFO: Pod downwardapi-volume-f3a2c82f-3b9e-4eda-b70a-6913a4fadc02 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:08:00.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5958" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4549,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:08:00.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:08:00.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747" in namespace "projected-1141" to be "success or failure"
Aug 21 00:08:00.230: INFO: Pod "downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013549ms
Aug 21 00:08:02.331: INFO: Pod "downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10325402s
Aug 21 00:08:04.335: INFO: Pod "downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107891165s
STEP: Saw pod success
Aug 21 00:08:04.335: INFO: Pod "downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747" satisfied condition "success or failure"
Aug 21 00:08:04.339: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747 container client-container: 
STEP: delete the pod
Aug 21 00:08:04.366: INFO: Waiting for pod downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747 to disappear
Aug 21 00:08:04.397: INFO: Pod downwardapi-volume-c4336d9a-e308-4689-9484-ec4b88e78747 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:08:04.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1141" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4564,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 00:08:04.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 21 00:08:12.540: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 00:08:12.547: INFO: Pod pod-with-poststart-http-hook still exists
Aug 21 00:08:14.547: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 00:08:14.552: INFO: Pod pod-with-poststart-http-hook still exists
Aug 21 00:08:16.547: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 00:08:16.551: INFO: Pod pod-with-poststart-http-hook no longer exists
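The disappear-wait just completed is a simple NotFound poll at the two-second cadence visible in the timestamps above. A sketch of such a wait, assuming the v1.17-era client-go used by this suite (no context argument to Get); the function name and parameters are illustrative, not the framework's exact helper.

    package e2esketch

    import (
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodToDisappear polls every two seconds until the pod's Get
    // returns NotFound or the timeout expires.
    func waitForPodToDisappear(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            _, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // pod no longer exists
            }
            // nil err: pod still exists, keep polling; any other error aborts
            return false, err
        })
    }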
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 00:08:16.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6097" for this suite.

• [SLOW TEST:12.155 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4566,"failed":0}
Aug 21 00:08:16.560: INFO: Running AfterSuite actions on all nodes
Aug 21 00:08:16.560: INFO: Running AfterSuite actions on node 1
Aug 21 00:08:16.560: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 4855.836 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS