I0826 14:03:51.871693 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0826 14:03:51.876476 7 e2e.go:109] Starting e2e run "8d1f7caf-4170-474c-8408-2dd603ddf8f0" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598450616 - Will randomize all specs
Will run 278 of 4844 specs

Aug 26 14:03:52.441: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 14:03:52.495: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 26 14:03:52.866: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 26 14:03:53.586: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 26 14:03:53.586: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 26 14:03:53.586: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 26 14:03:53.633: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 26 14:03:53.633: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 26 14:03:53.633: INFO: e2e test version: v1.17.11
Aug 26 14:03:53.638: INFO: kube-apiserver version: v1.17.5
Aug 26 14:03:53.641: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 14:03:53.659: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:03:53.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Aug 26 14:03:54.231: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-102d7f68-95ef-466b-9ba4-cdac4448415e
STEP: Creating a pod to test consume configMaps
Aug 26 14:03:54.275: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd" in namespace "configmap-5546" to be "success or failure"
Aug 26 14:03:54.301: INFO: Pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.195862ms
Aug 26 14:03:56.337: INFO: Pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06186018s
Aug 26 14:03:58.564: INFO: Pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287947652s
Aug 26 14:04:00.615: INFO: Pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339851704s
Aug 26 14:04:03.158: INFO: Pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd": Phase="Running", Reason="", readiness=true. Elapsed: 8.882907518s
Aug 26 14:04:05.228: INFO: Pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.951989486s
STEP: Saw pod success
Aug 26 14:04:05.228: INFO: Pod "pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd" satisfied condition "success or failure"
Aug 26 14:04:05.557: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd container configmap-volume-test:
STEP: delete the pod
Aug 26 14:04:06.838: INFO: Waiting for pod pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd to disappear
Aug 26 14:04:07.147: INFO: Pod pod-configmaps-f1e54e62-ca33-41ad-8a1c-51f3f29b40bd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:04:07.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5546" for this suite.
• [SLOW TEST:14.121 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":9,"failed":0}
SSSSSSSS
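
The spec above mounts a ConfigMap into a pod that runs as a non-root UID and passes once the container has read a key back from the volume. A minimal equivalent manifest looks roughly like this (a sketch: the names, the busybox image, and the data key are illustrative, not the exact objects the framework generated):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-configmap
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-configmap-reader
spec:
  securityContext:
    runAsUser: 1000          # the non-root, [LinuxOnly] variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-configmap
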
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:04:07.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:04:08.487: INFO: Creating ReplicaSet my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20
Aug 26 14:04:08.967: INFO: Pod name my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20: Found 0 pods out of 1
Aug 26 14:04:14.007: INFO: Pod name my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20: Found 1 pods out of 1
Aug 26 14:04:14.007: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20" is running
Aug 26 14:04:16.091: INFO: Pod "my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20-k74h5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:04:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:04:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:04:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:04:08 +0000 UTC Reason: Message:}])
Aug 26 14:04:16.094: INFO: Trying to dial the pod
Aug 26 14:04:21.118: INFO: Controller my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20: Got expected result from replica 1 [my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20-k74h5]: "my-hostname-basic-31d69adf-170f-4946-ab02-fd0d2e4a3d20-k74h5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:04:21.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4199" for this suite.
• [SLOW TEST:13.340 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":2,"skipped":17,"failed":0}
SSSSSS
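
The ReplicaSet spec creates one replica of a pod that serves its own hostname over HTTP, then dials each replica until it answers with its pod name. A roughly equivalent manifest, assuming the agnhost test image the suite uses elsewhere in this log (labels and port are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]     # replies with the pod's hostname
        ports:
        - containerPort: 9376
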
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:04:21.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 14:04:21.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0" in namespace "downward-api-5088" to be "success or failure"
Aug 26 14:04:21.733: INFO: Pod "downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0": Phase="Pending", Reason="", readiness=false. Elapsed: 102.073604ms
Aug 26 14:04:24.283: INFO: Pod "downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.651587496s
Aug 26 14:04:26.288: INFO: Pod "downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.657405957s
Aug 26 14:04:28.427: INFO: Pod "downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.795905856s
STEP: Saw pod success
Aug 26 14:04:28.427: INFO: Pod "downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0" satisfied condition "success or failure"
Aug 26 14:04:28.431: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0 container client-container:
STEP: delete the pod
Aug 26 14:04:29.246: INFO: Waiting for pod downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0 to disappear
Aug 26 14:04:29.270: INFO: Pod downwardapi-volume-1d7fc67b-10df-40da-9d37-fe565c12fbd0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:04:29.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5088" for this suite.
• [SLOW TEST:8.205 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":23,"failed":0}
SSSSSS
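
This spec projects the container's own memory limit into a file through a downward API volume and asserts on the file contents. A comparable manifest (names and the 64Mi limit are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container   # required for resource fields
          resource: limits.memory
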
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":4,"skipped":29,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:04:30.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 14:04:31.247: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:04:36.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7997" for this suite. • [SLOW TEST:6.350 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":5,"skipped":34,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:04:36.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] 
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:04:30.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:04:31.247: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:04:36.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7997" for this suite.
• [SLOW TEST:6.350 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":5,"skipped":34,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:04:36.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:05:35.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7512" for this suite.
• [SLOW TEST:59.244 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":39,"failed":0}
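
The three terminate-cmd-* containers exercise how each restart policy maps container exit codes to the observed RestartCount, pod Phase, Ready condition, and container State. A minimal pod in the spirit of the rpof (restart-on-failure) variant; the real test flips the container's exit status between restarts, and the image and command here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: OnFailure   # rpa uses Always, rpn uses Never
  containers:
  - name: terminate-cmd-rpof
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]   # non-zero exit drives RestartCount up
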
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:05:35.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 14:05:45.093: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 14:05:48.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047545, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047543, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:05:50.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047545, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047543, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:05:53.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047545, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047543, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:05:55.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047544, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047545, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047543, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 14:05:58.438: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:05:58.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2554-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:06:01.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2148" for this suite.
STEP: Destroying namespace "webhook-2148-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:28.034 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":7,"skipped":39,"failed":0}
SSSSSSSSSSSSSS
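
The spec deploys sample-webhook-deployment, then registers a mutating webhook for the custom resource e2e-test-webhook-2554-crds.webhook.example.com named in the log. A registration object of that general shape looks roughly like this; the service reference, path, apiVersions, and caBundle are placeholders, since the framework wires in its own certificates and service:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook
webhooks:
- name: mutate-custom-resource.webhook.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-2554-crds"]
  clientConfig:
    service:
      namespace: webhook-2148
      name: e2e-test-webhook
      path: /mutating-custom-resource   # hypothetical handler path
    caBundle: "<base64-encoded-CA-bundle>"   # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
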
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  custom resource defaulting for requests and from storage works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:06:03.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:06:04.800: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:06:10.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6085" for this suite.
• [SLOW TEST:6.617 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":8,"skipped":53,"failed":0}
SSSSSSSSSSSSSS
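
Custom resource defaulting relies on a structural schema that declares default values, which the API server applies both to incoming requests and to objects read back from storage. A sketch of a CRD with one defaulted field (the group, names, and field are illustrative, not the CRD the test generated):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1   # applied on requests and when reading from storage
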
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:06:10.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:07:11.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1792" for this suite.
• [SLOW TEST:61.134 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":67,"failed":0}
SSSSSSSSS
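
A readiness probe that always fails leaves the pod Running but never Ready, and, unlike a liveness probe, never restarts the container; that is exactly what this spec asserts over its one-minute window. A pod in that spirit (the image and probe command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["test-webserver"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: Ready stays False, RestartCount stays 0
      initialDelaySeconds: 5
      periodSeconds: 5
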
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:07:11.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 14:07:24.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 14:07:26.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:07:28.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047644, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 14:07:32.120: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:07:32.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-415" for this suite.
STEP: Destroying namespace "webhook-415-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:20.966 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":10,"skipped":76,"failed":0}
SSSSS
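
This spec registers webhooks that try to intercept operations on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, then shows that dummy configurations can still be created and deleted: admission webhooks cannot lock out changes to the admission configuration API. A registration of the kind the test creates looks roughly like this (the service reference, path, and caBundle are placeholders):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions
webhooks:
- name: deny-webhook-configuration-deletions.example.com
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["*"]
    operations: ["DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: webhook-415
      name: e2e-test-webhook
      path: /always-deny   # hypothetical handler path
    caBundle: "<base64-encoded-CA-bundle>"   # placeholder
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
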
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:07:32.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:07:33.205: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 14:07:33.273: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:33.316: INFO: Number of nodes with available pods: 0
Aug 26 14:07:33.316: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:07:34.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:34.330: INFO: Number of nodes with available pods: 0
Aug 26 14:07:34.330: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:07:35.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:35.330: INFO: Number of nodes with available pods: 0
Aug 26 14:07:35.330: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:07:36.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:36.331: INFO: Number of nodes with available pods: 0
Aug 26 14:07:36.331: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:07:37.351: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:37.523: INFO: Number of nodes with available pods: 1
Aug 26 14:07:37.523: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:07:38.566: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:38.598: INFO: Number of nodes with available pods: 1
Aug 26 14:07:38.598: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:07:39.385: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:39.655: INFO: Number of nodes with available pods: 2
Aug 26 14:07:39.655: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 26 14:07:40.415: INFO: Wrong image for pod: daemon-set-26ffv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:40.416: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:40.427: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:41.503: INFO: Wrong image for pod: daemon-set-26ffv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:41.503: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:41.523: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:42.435: INFO: Wrong image for pod: daemon-set-26ffv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:42.436: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:42.442: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:43.661: INFO: Wrong image for pod: daemon-set-26ffv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:43.661: INFO: Pod daemon-set-26ffv is not available
Aug 26 14:07:43.661: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:43.959: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:45.523: INFO: Wrong image for pod: daemon-set-26ffv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:45.524: INFO: Pod daemon-set-26ffv is not available
Aug 26 14:07:45.524: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:45.560: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:46.434: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:46.435: INFO: Pod daemon-set-nvztl is not available
Aug 26 14:07:46.441: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:47.435: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:47.436: INFO: Pod daemon-set-nvztl is not available
Aug 26 14:07:47.442: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:48.582: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:48.582: INFO: Pod daemon-set-nvztl is not available
Aug 26 14:07:48.622: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:49.647: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:49.647: INFO: Pod daemon-set-nvztl is not available
Aug 26 14:07:49.656: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:50.562: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:50.562: INFO: Pod daemon-set-nvztl is not available
Aug 26 14:07:50.573: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:51.433: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:51.438: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:52.604: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:52.639: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:53.826: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:53.827: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:07:53.854: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:54.436: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:54.436: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:07:54.446: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:55.434: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:55.435: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:07:55.443: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:56.533: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:56.533: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:07:56.759: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:57.434: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:57.434: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:07:57.442: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:58.433: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:58.433: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:07:58.439: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:07:59.835: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:07:59.835: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:07:59.968: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:00.436: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:08:00.436: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:08:00.444: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:01.435: INFO: Wrong image for pod: daemon-set-dkgh2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 14:08:01.435: INFO: Pod daemon-set-dkgh2 is not available
Aug 26 14:08:01.441: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:02.578: INFO: Pod daemon-set-6lrr7 is not available
Aug 26 14:08:02.829: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 26 14:08:02.866: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:02.870: INFO: Number of nodes with available pods: 1
Aug 26 14:08:02.870: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:08:03.914: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:04.228: INFO: Number of nodes with available pods: 1
Aug 26 14:08:04.228: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:08:05.090: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:05.456: INFO: Number of nodes with available pods: 1
Aug 26 14:08:05.456: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:08:05.880: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:05.923: INFO: Number of nodes with available pods: 1
Aug 26 14:08:05.923: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:08:07.031: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:07.091: INFO: Number of nodes with available pods: 1
Aug 26 14:08:07.091: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:08:08.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:08.397: INFO: Number of nodes with available pods: 1
Aug 26 14:08:08.397: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:08:08.879: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:08.885: INFO: Number of nodes with available pods: 1
Aug 26 14:08:08.885: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 14:08:09.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 14:08:10.415: INFO: Number of nodes with available pods: 2
Aug 26 14:08:10.416: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4403, will wait for the garbage collector to delete the pods
Aug 26 14:08:10.721: INFO: Deleting DaemonSet.extensions daemon-set took: 7.728715ms
Aug 26 14:08:11.123: INFO: Terminating DaemonSet.extensions daemon-set pods took: 402.242053ms
Aug 26 14:08:16.633: INFO: Number of nodes with available pods: 0
Aug 26 14:08:16.634: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 14:08:16.639: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4403/daemonsets","resourceVersion":"3892083"},"items":null}
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4403/daemonsets","resourceVersion":"3892083"},"items":null} Aug 26 14:08:16.669: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4403/pods","resourceVersion":"3892084"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:08:16.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4403" for this suite. • [SLOW TEST:44.068 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":11,"skipped":81,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:08:16.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 14:08:17.215: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:08:18.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9556" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":12,"skipped":84,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:08:19.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 26 14:08:21.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731" in namespace "downward-api-936" to be "success or failure" Aug 26 14:08:22.521: INFO: Pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731": Phase="Pending", Reason="", readiness=false. Elapsed: 1.039622433s Aug 26 14:08:24.898: INFO: Pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731": Phase="Pending", Reason="", readiness=false. Elapsed: 3.416729355s Aug 26 14:08:27.738: INFO: Pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256468865s Aug 26 14:08:30.170: INFO: Pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731": Phase="Pending", Reason="", readiness=false. Elapsed: 8.688351468s Aug 26 14:08:32.438: INFO: Pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731": Phase="Running", Reason="", readiness=true. Elapsed: 10.956707373s Aug 26 14:08:34.447: INFO: Pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.965442214s STEP: Saw pod success Aug 26 14:08:34.447: INFO: Pod "downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731" satisfied condition "success or failure" Aug 26 14:08:34.524: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731 container client-container: STEP: delete the pod Aug 26 14:08:34.994: INFO: Waiting for pod downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731 to disappear Aug 26 14:08:35.062: INFO: Pod downwardapi-volume-8aeb2ca4-8efb-4c59-b370-80f32daa7731 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:08:35.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-936" for this suite. 
• [SLOW TEST:15.886 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":94,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:08:35.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 26 14:08:36.069: INFO: Waiting up to 5m0s for pod "pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9" in namespace "emptydir-1251" to be "success or failure" Aug 26 14:08:36.227: INFO: Pod "pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 157.709172ms Aug 26 14:08:38.300: INFO: Pod "pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2305914s Aug 26 14:08:40.360: INFO: Pod "pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290690478s Aug 26 14:08:42.366: INFO: Pod "pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9": Phase="Running", Reason="", readiness=true. Elapsed: 6.296477276s Aug 26 14:08:44.372: INFO: Pod "pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.302477221s STEP: Saw pod success Aug 26 14:08:44.372: INFO: Pod "pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9" satisfied condition "success or failure" Aug 26 14:08:44.376: INFO: Trying to get logs from node jerma-worker pod pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9 container test-container: STEP: delete the pod Aug 26 14:08:44.601: INFO: Waiting for pod pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9 to disappear Aug 26 14:08:44.868: INFO: Pod pod-b48f2f7e-265d-42ce-b72e-018e9d359ff9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:08:44.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1251" for this suite. 
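[In the emptyDir spec above, the (non-root,0777,default) triple in the name describes the case being exercised: a non-root user, an expected 0777 mode on the mount, and the default (disk-backed) medium. A minimal sketch under those assumptions, names hypothetical:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo               # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                 # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                    # default medium: node disk, not tmpfs
  EOF
]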
• [SLOW TEST:10.057 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":98,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:08:45.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 26 14:08:46.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db" in namespace "downward-api-1758" to be "success or failure" Aug 26 14:08:46.821: INFO: Pod "downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db": Phase="Pending", Reason="", readiness=false. Elapsed: 176.95433ms Aug 26 14:08:48.828: INFO: Pod "downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184279977s Aug 26 14:08:51.384: INFO: Pod "downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.739807338s Aug 26 14:08:53.449: INFO: Pod "downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.805668687s Aug 26 14:08:55.455: INFO: Pod "downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.810884898s STEP: Saw pod success Aug 26 14:08:55.455: INFO: Pod "downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db" satisfied condition "success or failure" Aug 26 14:08:55.458: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db container client-container: STEP: delete the pod Aug 26 14:08:55.815: INFO: Waiting for pod downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db to disappear Aug 26 14:08:55.834: INFO: Pod downwardapi-volume-dcec2318-6f54-4a31-a2c0-f99da408d7db no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:08:55.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1758" for this suite. • [SLOW TEST:10.667 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":101,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:08:55.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7264 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 26 14:08:55.991: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 26 14:09:32.494: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.223 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7264 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:09:32.494: INFO: >>> kubeConfig: /root/.kube/config I0826 14:09:32.663750 7 log.go:172] (0x6afd260) (0x6afd2d0) Create stream I0826 14:09:32.664078 7 log.go:172] (0x6afd260) (0x6afd2d0) Stream added, broadcasting: 1 I0826 14:09:32.678428 7 log.go:172] (0x6afd260) Reply frame received for 1 I0826 14:09:32.678990 7 log.go:172] (0x6afd260) (0x7c1fc70) Create stream I0826 14:09:32.679072 7 log.go:172] (0x6afd260) (0x7c1fc70) Stream added, 
broadcasting: 3 I0826 14:09:32.680958 7 log.go:172] (0x6afd260) Reply frame received for 3 I0826 14:09:32.681256 7 log.go:172] (0x6afd260) (0x6afd500) Create stream I0826 14:09:32.681317 7 log.go:172] (0x6afd260) (0x6afd500) Stream added, broadcasting: 5 I0826 14:09:32.682226 7 log.go:172] (0x6afd260) Reply frame received for 5 I0826 14:09:33.987126 7 log.go:172] (0x6afd260) Data frame received for 3 I0826 14:09:33.987665 7 log.go:172] (0x7c1fc70) (3) Data frame handling I0826 14:09:33.988589 7 log.go:172] (0x7c1fc70) (3) Data frame sent I0826 14:09:33.988961 7 log.go:172] (0x6afd260) Data frame received for 1 I0826 14:09:33.989109 7 log.go:172] (0x6afd2d0) (1) Data frame handling I0826 14:09:33.989239 7 log.go:172] (0x6afd2d0) (1) Data frame sent I0826 14:09:33.989345 7 log.go:172] (0x6afd260) Data frame received for 3 I0826 14:09:33.989466 7 log.go:172] (0x7c1fc70) (3) Data frame handling I0826 14:09:33.989632 7 log.go:172] (0x6afd260) Data frame received for 5 I0826 14:09:33.989822 7 log.go:172] (0x6afd500) (5) Data frame handling I0826 14:09:33.991847 7 log.go:172] (0x6afd260) (0x6afd2d0) Stream removed, broadcasting: 1 I0826 14:09:33.992118 7 log.go:172] (0x6afd260) Go away received I0826 14:09:33.993970 7 log.go:172] (0x6afd260) (0x6afd2d0) Stream removed, broadcasting: 1 I0826 14:09:33.994169 7 log.go:172] (0x6afd260) (0x7c1fc70) Stream removed, broadcasting: 3 I0826 14:09:33.994339 7 log.go:172] (0x6afd260) (0x6afd500) Stream removed, broadcasting: 5 Aug 26 14:09:33.994: INFO: Found all expected endpoints: [netserver-0] Aug 26 14:09:34.205: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.143 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7264 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:09:34.205: INFO: >>> kubeConfig: /root/.kube/config I0826 14:09:34.496602 7 log.go:172] (0x80087e0) (0x8008850) Create stream I0826 14:09:34.496878 7 log.go:172] (0x80087e0) (0x8008850) Stream added, broadcasting: 1 I0826 14:09:34.505895 7 log.go:172] (0x80087e0) Reply frame received for 1 I0826 14:09:34.506171 7 log.go:172] (0x80087e0) (0x6afd8f0) Create stream I0826 14:09:34.506283 7 log.go:172] (0x80087e0) (0x6afd8f0) Stream added, broadcasting: 3 I0826 14:09:34.507927 7 log.go:172] (0x80087e0) Reply frame received for 3 I0826 14:09:34.508099 7 log.go:172] (0x80087e0) (0x7e78150) Create stream I0826 14:09:34.508193 7 log.go:172] (0x80087e0) (0x7e78150) Stream added, broadcasting: 5 I0826 14:09:34.509484 7 log.go:172] (0x80087e0) Reply frame received for 5 I0826 14:09:35.579183 7 log.go:172] (0x80087e0) Data frame received for 5 I0826 14:09:35.579315 7 log.go:172] (0x7e78150) (5) Data frame handling I0826 14:09:35.579674 7 log.go:172] (0x80087e0) Data frame received for 3 I0826 14:09:35.579777 7 log.go:172] (0x6afd8f0) (3) Data frame handling I0826 14:09:35.579873 7 log.go:172] (0x6afd8f0) (3) Data frame sent I0826 14:09:35.579935 7 log.go:172] (0x80087e0) Data frame received for 3 I0826 14:09:35.579989 7 log.go:172] (0x6afd8f0) (3) Data frame handling I0826 14:09:35.581714 7 log.go:172] (0x80087e0) Data frame received for 1 I0826 14:09:35.581816 7 log.go:172] (0x8008850) (1) Data frame handling I0826 14:09:35.581916 7 log.go:172] (0x8008850) (1) Data frame sent I0826 14:09:35.581992 7 log.go:172] (0x80087e0) (0x8008850) Stream removed, broadcasting: 1 I0826 14:09:35.582376 7 log.go:172] (0x80087e0) (0x8008850) Stream removed, broadcasting: 1 I0826 14:09:35.582457 7 
log.go:172] (0x80087e0) (0x6afd8f0) Stream removed, broadcasting: 3 I0826 14:09:35.582538 7 log.go:172] (0x80087e0) (0x7e78150) Stream removed, broadcasting: 5 Aug 26 14:09:35.582: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:09:35.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0826 14:09:35.583887 7 log.go:172] (0x80087e0) Go away received STEP: Destroying namespace "pod-network-test-7264" for this suite. • [SLOW TEST:40.064 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:09:35.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 26 14:09:49.278: INFO: Successfully updated pod "annotationupdateadfa3cc7-32c9-4323-b8bc-6d2853a10b14" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:09:51.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-224" for this suite. 
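[The annotation-update spec above ("Successfully updated pod ...") relies on the kubelet refreshing downward API volume files when pod metadata changes. A minimal sketch of the same mechanism, names hypothetical:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotation-demo             # hypothetical name
    annotations:
      build: one
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  kubectl annotate pod annotation-demo build=two --overwrite
  # the new value appears in /etc/podinfo/annotations after the kubelet's next sync
]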
• [SLOW TEST:15.605 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":169,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:09:51.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 14:09:57.817: INFO: Waiting up to 5m0s for pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4" in namespace "pods-2088" to be "success or failure" Aug 26 14:09:58.340: INFO: Pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4": Phase="Pending", Reason="", readiness=false. Elapsed: 523.199312ms Aug 26 14:10:01.674: INFO: Pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.856937388s Aug 26 14:10:03.762: INFO: Pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.944415296s Aug 26 14:10:05.769: INFO: Pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.951452488s Aug 26 14:10:07.840: INFO: Pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023016733s Aug 26 14:10:09.847: INFO: Pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.029450643s STEP: Saw pod success Aug 26 14:10:09.847: INFO: Pod "client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4" satisfied condition "success or failure" Aug 26 14:10:09.851: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4 container env3cont: STEP: delete the pod Aug 26 14:10:09.958: INFO: Waiting for pod client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4 to disappear Aug 26 14:10:09.969: INFO: Pod client-envvars-2c3335fc-332e-4cb7-9851-22389e2487f4 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:10:09.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2088" for this suite. • [SLOW TEST:18.458 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":183,"failed":0} SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:10:09.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 14:10:10.073: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:10:16.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7335" for this suite. 
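[The websocket spec above submits a pod and then drives the pods/exec subresource over a websocket connection. kubectl reaches the same endpoint (over SPDY rather than a raw websocket), so a rough manual equivalent looks like this, with a hypothetical pod name:

  kubectl run ws-demo --image=busybox --restart=Never -- sh -c "sleep 3600"
  kubectl wait --for=condition=Ready pod/ws-demo
  kubectl exec ws-demo -- cat /etc/resolv.conf   # remote command execution in the running container
]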
• [SLOW TEST:6.633 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":186,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:10:16.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 26 14:10:23.207: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 26 14:10:25.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047824, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:10:28.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047824, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:10:29.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047824, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:10:31.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047824, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047823, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 14:10:34.726: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 14:10:35.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:10:39.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2751" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:24.789 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":20,"skipped":190,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:10:41.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 26 14:10:42.120: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:10:56.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7852" for this suite. 
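[The init-container spec above ("PodSpec: initContainers in spec.initContainers") verifies that init containers run to completion, in order, before the main container of a RestartAlways pod starts. A minimal sketch, names hypothetical:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo                   # hypothetical name
  spec:
    initContainers:
    - name: init1
      image: busybox
      command: ["true"]
    - name: init2
      image: busybox
      command: ["true"]
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]   # restartPolicy defaults to Always
  EOF
  kubectl get pod init-demo -w        # status walks through Init:0/2, Init:1/2, PodInitializing, Running
]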
• [SLOW TEST:14.859 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":21,"skipped":210,"failed":0} SSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:10:56.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:11:25.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9186" for this suite. 
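[In the Job spec above, "locally restarted" means a failing container is restarted in place by the kubelet (restartPolicy: OnFailure) rather than the pod being replaced, and the Job still reaches its completion count. A sketch of a task that fails exactly once per pod, using an emptyDir marker that survives the container restart; all names are hypothetical:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: fail-once-job               # hypothetical name
  spec:
    completions: 2
    parallelism: 2
    template:
      spec:
        restartPolicy: OnFailure      # restart the container, don't create a new pod
        containers:
        - name: task
          image: busybox
          command: ["sh", "-c", "if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
          volumeMounts:
          - name: data
            mountPath: /data
        volumes:
        - name: data
          emptyDir: {}                # persists across container restarts within the pod
  EOF
]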
• [SLOW TEST:29.616 seconds] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":22,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:11:25.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6329 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 26 14:11:29.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 26 14:12:04.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.2.233&port=8081&tries=1'] Namespace:pod-network-test-6329 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:04.801: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:05.005625 7 log.go:172] (0x821ed90) (0x821f030) Create stream I0826 14:12:05.006066 7 log.go:172] (0x821ed90) (0x821f030) Stream added, broadcasting: 1 I0826 14:12:05.012314 7 log.go:172] (0x821ed90) Reply frame received for 1 I0826 14:12:05.012543 7 log.go:172] (0x821ed90) (0x821ff10) Create stream I0826 14:12:05.012653 7 log.go:172] (0x821ed90) (0x821ff10) Stream added, broadcasting: 3 I0826 14:12:05.014595 7 log.go:172] (0x821ed90) Reply frame received for 3 I0826 14:12:05.014811 7 log.go:172] (0x821ed90) (0x66963f0) Create stream I0826 14:12:05.014932 7 log.go:172] (0x821ed90) (0x66963f0) Stream added, broadcasting: 5 I0826 14:12:05.016444 7 log.go:172] (0x821ed90) Reply frame received for 5 I0826 14:12:05.123497 7 log.go:172] (0x821ed90) Data frame received for 3 I0826 14:12:05.123751 7 log.go:172] (0x821ff10) (3) Data frame handling I0826 14:12:05.123974 7 log.go:172] (0x821ff10) (3) Data frame sent I0826 14:12:05.124141 7 log.go:172] (0x821ed90) Data frame received for 3 I0826 14:12:05.124270 7 log.go:172] (0x821ff10) (3) Data frame handling I0826 14:12:05.124542 7 log.go:172] (0x821ed90) Data frame received for 5 I0826 14:12:05.124709 7 log.go:172] (0x66963f0) (5) Data frame 
handling I0826 14:12:05.125887 7 log.go:172] (0x821ed90) Data frame received for 1 I0826 14:12:05.126098 7 log.go:172] (0x821f030) (1) Data frame handling I0826 14:12:05.126237 7 log.go:172] (0x821f030) (1) Data frame sent I0826 14:12:05.126418 7 log.go:172] (0x821ed90) (0x821f030) Stream removed, broadcasting: 1 I0826 14:12:05.126672 7 log.go:172] (0x821ed90) Go away received I0826 14:12:05.127022 7 log.go:172] (0x821ed90) (0x821f030) Stream removed, broadcasting: 1 I0826 14:12:05.127149 7 log.go:172] (0x821ed90) (0x821ff10) Stream removed, broadcasting: 3 I0826 14:12:05.127520 7 log.go:172] (0x821ed90) (0x66963f0) Stream removed, broadcasting: 5 Aug 26 14:12:05.128: INFO: Waiting for responses: map[] Aug 26 14:12:05.134: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.1.148&port=8081&tries=1'] Namespace:pod-network-test-6329 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:05.135: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:05.235585 7 log.go:172] (0x7c1f260) (0x7c1f9d0) Create stream I0826 14:12:05.235774 7 log.go:172] (0x7c1f260) (0x7c1f9d0) Stream added, broadcasting: 1 I0826 14:12:05.240378 7 log.go:172] (0x7c1f260) Reply frame received for 1 I0826 14:12:05.240563 7 log.go:172] (0x7c1f260) (0x8008b60) Create stream I0826 14:12:05.240679 7 log.go:172] (0x7c1f260) (0x8008b60) Stream added, broadcasting: 3 I0826 14:12:05.242647 7 log.go:172] (0x7c1f260) Reply frame received for 3 I0826 14:12:05.242865 7 log.go:172] (0x7c1f260) (0x912a150) Create stream I0826 14:12:05.243001 7 log.go:172] (0x7c1f260) (0x912a150) Stream added, broadcasting: 5 I0826 14:12:05.244966 7 log.go:172] (0x7c1f260) Reply frame received for 5 I0826 14:12:05.319557 7 log.go:172] (0x7c1f260) Data frame received for 3 I0826 14:12:05.319760 7 log.go:172] (0x8008b60) (3) Data frame handling I0826 14:12:05.319926 7 log.go:172] (0x7c1f260) Data frame received for 5 I0826 14:12:05.320138 7 log.go:172] (0x912a150) (5) Data frame handling I0826 14:12:05.320369 7 log.go:172] (0x8008b60) (3) Data frame sent I0826 14:12:05.320639 7 log.go:172] (0x7c1f260) Data frame received for 3 I0826 14:12:05.320931 7 log.go:172] (0x8008b60) (3) Data frame handling I0826 14:12:05.321076 7 log.go:172] (0x7c1f260) Data frame received for 1 I0826 14:12:05.321295 7 log.go:172] (0x7c1f9d0) (1) Data frame handling I0826 14:12:05.321449 7 log.go:172] (0x7c1f9d0) (1) Data frame sent I0826 14:12:05.321638 7 log.go:172] (0x7c1f260) (0x7c1f9d0) Stream removed, broadcasting: 1 I0826 14:12:05.321828 7 log.go:172] (0x7c1f260) Go away received I0826 14:12:05.322060 7 log.go:172] (0x7c1f260) (0x7c1f9d0) Stream removed, broadcasting: 1 I0826 14:12:05.322152 7 log.go:172] (0x7c1f260) (0x8008b60) Stream removed, broadcasting: 3 I0826 14:12:05.322255 7 log.go:172] (0x7c1f260) (0x912a150) Stream removed, broadcasting: 5 Aug 26 14:12:05.322: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:12:05.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6329" for this suite. 
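[The UDP check above can be replayed by hand with the same command the framework ran, via kubectl exec against the helper pod; the pod IPs (10.244.x.x) are specific to this run and would differ on any other cluster:

  kubectl exec -n pod-network-test-6329 host-test-container-pod -c agnhost -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.2.233&port=8081&tries=1'"
  # agnhost's /dial endpoint relays a UDP "hostname" probe to the target pod and
  # returns a JSON body listing the responders; "Waiting for responses: map[]" in
  # the log means every expected endpoint answered.
]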
• [SLOW TEST:39.442 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:12:05.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 26 14:12:32.410: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:32.410: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:32.637903 7 log.go:172] (0x7052380) (0x70523f0) Create stream I0826 14:12:32.638054 7 log.go:172] (0x7052380) (0x70523f0) Stream added, broadcasting: 1 I0826 14:12:32.641052 7 log.go:172] (0x7052380) Reply frame received for 1 I0826 14:12:32.641176 7 log.go:172] (0x7052380) (0x6afc000) Create stream I0826 14:12:32.641228 7 log.go:172] (0x7052380) (0x6afc000) Stream added, broadcasting: 3 I0826 14:12:32.642185 7 log.go:172] (0x7052380) Reply frame received for 3 I0826 14:12:32.642305 7 log.go:172] (0x7052380) (0x70525b0) Create stream I0826 14:12:32.642378 7 log.go:172] (0x7052380) (0x70525b0) Stream added, broadcasting: 5 I0826 14:12:32.643424 7 log.go:172] (0x7052380) Reply frame received for 5 I0826 14:12:32.705975 7 log.go:172] (0x7052380) Data frame received for 5 I0826 14:12:32.706159 7 log.go:172] (0x70525b0) (5) Data frame handling I0826 14:12:32.706266 7 log.go:172] (0x7052380) Data frame received for 3 I0826 14:12:32.706387 7 log.go:172] (0x6afc000) (3) Data frame handling I0826 14:12:32.706517 7 log.go:172] (0x6afc000) (3) Data frame sent I0826 14:12:32.706638 7 log.go:172] (0x7052380) Data frame received for 3 I0826 14:12:32.706738 7 log.go:172] 
(0x6afc000) (3) Data frame handling I0826 14:12:32.707687 7 log.go:172] (0x7052380) Data frame received for 1 I0826 14:12:32.707819 7 log.go:172] (0x70523f0) (1) Data frame handling I0826 14:12:32.707929 7 log.go:172] (0x70523f0) (1) Data frame sent I0826 14:12:32.708039 7 log.go:172] (0x7052380) (0x70523f0) Stream removed, broadcasting: 1 I0826 14:12:32.708149 7 log.go:172] (0x7052380) Go away received I0826 14:12:32.708366 7 log.go:172] (0x7052380) (0x70523f0) Stream removed, broadcasting: 1 I0826 14:12:32.708482 7 log.go:172] (0x7052380) (0x6afc000) Stream removed, broadcasting: 3 I0826 14:12:32.708560 7 log.go:172] (0x7052380) (0x70525b0) Stream removed, broadcasting: 5 Aug 26 14:12:32.708: INFO: Exec stderr: "" Aug 26 14:12:32.709: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:32.709: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:33.429025 7 log.go:172] (0x7e78fc0) (0x7e79030) Create stream I0826 14:12:33.429213 7 log.go:172] (0x7e78fc0) (0x7e79030) Stream added, broadcasting: 1 I0826 14:12:33.433168 7 log.go:172] (0x7e78fc0) Reply frame received for 1 I0826 14:12:33.433426 7 log.go:172] (0x7e78fc0) (0x6afc230) Create stream I0826 14:12:33.433540 7 log.go:172] (0x7e78fc0) (0x6afc230) Stream added, broadcasting: 3 I0826 14:12:33.435295 7 log.go:172] (0x7e78fc0) Reply frame received for 3 I0826 14:12:33.435460 7 log.go:172] (0x7e78fc0) (0x6afc4d0) Create stream I0826 14:12:33.435552 7 log.go:172] (0x7e78fc0) (0x6afc4d0) Stream added, broadcasting: 5 I0826 14:12:33.437151 7 log.go:172] (0x7e78fc0) Reply frame received for 5 I0826 14:12:33.508321 7 log.go:172] (0x7e78fc0) Data frame received for 3 I0826 14:12:33.508519 7 log.go:172] (0x6afc230) (3) Data frame handling I0826 14:12:33.508619 7 log.go:172] (0x6afc230) (3) Data frame sent I0826 14:12:33.508702 7 log.go:172] (0x7e78fc0) Data frame received for 3 I0826 14:12:33.508859 7 log.go:172] (0x6afc230) (3) Data frame handling I0826 14:12:33.508947 7 log.go:172] (0x7e78fc0) Data frame received for 5 I0826 14:12:33.509038 7 log.go:172] (0x6afc4d0) (5) Data frame handling I0826 14:12:33.509329 7 log.go:172] (0x7e78fc0) Data frame received for 1 I0826 14:12:33.509450 7 log.go:172] (0x7e79030) (1) Data frame handling I0826 14:12:33.509564 7 log.go:172] (0x7e79030) (1) Data frame sent I0826 14:12:33.509680 7 log.go:172] (0x7e78fc0) (0x7e79030) Stream removed, broadcasting: 1 I0826 14:12:33.509786 7 log.go:172] (0x7e78fc0) Go away received I0826 14:12:33.510287 7 log.go:172] (0x7e78fc0) (0x7e79030) Stream removed, broadcasting: 1 I0826 14:12:33.510514 7 log.go:172] (0x7e78fc0) (0x6afc230) Stream removed, broadcasting: 3 I0826 14:12:33.510690 7 log.go:172] (0x7e78fc0) (0x6afc4d0) Stream removed, broadcasting: 5 Aug 26 14:12:33.510: INFO: Exec stderr: "" Aug 26 14:12:33.511: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:33.511: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:33.935261 7 log.go:172] (0x7052a80) (0x7052af0) Create stream I0826 14:12:33.935462 7 log.go:172] (0x7052a80) (0x7052af0) Stream added, broadcasting: 1 I0826 14:12:33.939446 7 log.go:172] (0x7052a80) Reply frame received for 1 I0826 14:12:33.939571 7 log.go:172] (0x7052a80) (0x7e79340) Create stream I0826 14:12:33.939633 7 
log.go:172] (0x7052a80) (0x7e79340) Stream added, broadcasting: 3 I0826 14:12:33.940809 7 log.go:172] (0x7052a80) Reply frame received for 3 I0826 14:12:33.940945 7 log.go:172] (0x7052a80) (0x7052cb0) Create stream I0826 14:12:33.941053 7 log.go:172] (0x7052a80) (0x7052cb0) Stream added, broadcasting: 5 I0826 14:12:33.942387 7 log.go:172] (0x7052a80) Reply frame received for 5 I0826 14:12:34.026779 7 log.go:172] (0x7052a80) Data frame received for 3 I0826 14:12:34.026974 7 log.go:172] (0x7e79340) (3) Data frame handling I0826 14:12:34.027047 7 log.go:172] (0x7052a80) Data frame received for 5 I0826 14:12:34.027138 7 log.go:172] (0x7052cb0) (5) Data frame handling I0826 14:12:34.027260 7 log.go:172] (0x7e79340) (3) Data frame sent I0826 14:12:34.027424 7 log.go:172] (0x7052a80) Data frame received for 3 I0826 14:12:34.027554 7 log.go:172] (0x7e79340) (3) Data frame handling I0826 14:12:34.027880 7 log.go:172] (0x7052a80) Data frame received for 1 I0826 14:12:34.028009 7 log.go:172] (0x7052af0) (1) Data frame handling I0826 14:12:34.028149 7 log.go:172] (0x7052af0) (1) Data frame sent I0826 14:12:34.028299 7 log.go:172] (0x7052a80) (0x7052af0) Stream removed, broadcasting: 1 I0826 14:12:34.028477 7 log.go:172] (0x7052a80) Go away received I0826 14:12:34.029030 7 log.go:172] (0x7052a80) (0x7052af0) Stream removed, broadcasting: 1 I0826 14:12:34.029252 7 log.go:172] (0x7052a80) (0x7e79340) Stream removed, broadcasting: 3 I0826 14:12:34.029378 7 log.go:172] (0x7052a80) (0x7052cb0) Stream removed, broadcasting: 5 Aug 26 14:12:34.029: INFO: Exec stderr: "" Aug 26 14:12:34.029: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:34.029: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:34.191862 7 log.go:172] (0x8358b60) (0x8358bd0) Create stream I0826 14:12:34.191998 7 log.go:172] (0x8358b60) (0x8358bd0) Stream added, broadcasting: 1 I0826 14:12:34.196029 7 log.go:172] (0x8358b60) Reply frame received for 1 I0826 14:12:34.196208 7 log.go:172] (0x8358b60) (0x8358d90) Create stream I0826 14:12:34.196309 7 log.go:172] (0x8358b60) (0x8358d90) Stream added, broadcasting: 3 I0826 14:12:34.197821 7 log.go:172] (0x8358b60) Reply frame received for 3 I0826 14:12:34.197969 7 log.go:172] (0x8358b60) (0x7e79880) Create stream I0826 14:12:34.198065 7 log.go:172] (0x8358b60) (0x7e79880) Stream added, broadcasting: 5 I0826 14:12:34.199622 7 log.go:172] (0x8358b60) Reply frame received for 5 I0826 14:12:34.255208 7 log.go:172] (0x8358b60) Data frame received for 3 I0826 14:12:34.255358 7 log.go:172] (0x8358d90) (3) Data frame handling I0826 14:12:34.255454 7 log.go:172] (0x8358d90) (3) Data frame sent I0826 14:12:34.255535 7 log.go:172] (0x8358b60) Data frame received for 3 I0826 14:12:34.255605 7 log.go:172] (0x8358d90) (3) Data frame handling I0826 14:12:34.255717 7 log.go:172] (0x8358b60) Data frame received for 5 I0826 14:12:34.255861 7 log.go:172] (0x7e79880) (5) Data frame handling I0826 14:12:34.256533 7 log.go:172] (0x8358b60) Data frame received for 1 I0826 14:12:34.256653 7 log.go:172] (0x8358bd0) (1) Data frame handling I0826 14:12:34.256828 7 log.go:172] (0x8358bd0) (1) Data frame sent I0826 14:12:34.256906 7 log.go:172] (0x8358b60) (0x8358bd0) Stream removed, broadcasting: 1 I0826 14:12:34.257199 7 log.go:172] (0x8358b60) Go away received I0826 14:12:34.257320 7 log.go:172] (0x8358b60) (0x8358bd0) Stream removed, broadcasting: 1 
I0826 14:12:34.257443 7 log.go:172] (0x8358b60) (0x8358d90) Stream removed, broadcasting: 3 I0826 14:12:34.257567 7 log.go:172] (0x8358b60) (0x7e79880) Stream removed, broadcasting: 5 Aug 26 14:12:34.257: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 26 14:12:34.257: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:34.257: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:34.351068 7 log.go:172] (0x83595e0) (0x8359650) Create stream I0826 14:12:34.351354 7 log.go:172] (0x83595e0) (0x8359650) Stream added, broadcasting: 1 I0826 14:12:34.356619 7 log.go:172] (0x83595e0) Reply frame received for 1 I0826 14:12:34.357076 7 log.go:172] (0x83595e0) (0x7053110) Create stream I0826 14:12:34.357289 7 log.go:172] (0x83595e0) (0x7053110) Stream added, broadcasting: 3 I0826 14:12:34.361888 7 log.go:172] (0x83595e0) Reply frame received for 3 I0826 14:12:34.362437 7 log.go:172] (0x83595e0) (0x70532d0) Create stream I0826 14:12:34.362606 7 log.go:172] (0x83595e0) (0x70532d0) Stream added, broadcasting: 5 I0826 14:12:34.364190 7 log.go:172] (0x83595e0) Reply frame received for 5 I0826 14:12:34.423508 7 log.go:172] (0x83595e0) Data frame received for 5 I0826 14:12:34.423674 7 log.go:172] (0x70532d0) (5) Data frame handling I0826 14:12:34.423746 7 log.go:172] (0x83595e0) Data frame received for 3 I0826 14:12:34.423824 7 log.go:172] (0x7053110) (3) Data frame handling I0826 14:12:34.423909 7 log.go:172] (0x7053110) (3) Data frame sent I0826 14:12:34.423990 7 log.go:172] (0x83595e0) Data frame received for 3 I0826 14:12:34.424076 7 log.go:172] (0x7053110) (3) Data frame handling I0826 14:12:34.424360 7 log.go:172] (0x83595e0) Data frame received for 1 I0826 14:12:34.424507 7 log.go:172] (0x8359650) (1) Data frame handling I0826 14:12:34.424693 7 log.go:172] (0x8359650) (1) Data frame sent I0826 14:12:34.424919 7 log.go:172] (0x83595e0) (0x8359650) Stream removed, broadcasting: 1 I0826 14:12:34.425066 7 log.go:172] (0x83595e0) Go away received I0826 14:12:34.425262 7 log.go:172] (0x83595e0) (0x8359650) Stream removed, broadcasting: 1 I0826 14:12:34.425329 7 log.go:172] (0x83595e0) (0x7053110) Stream removed, broadcasting: 3 I0826 14:12:34.425387 7 log.go:172] (0x83595e0) (0x70532d0) Stream removed, broadcasting: 5 Aug 26 14:12:34.425: INFO: Exec stderr: "" Aug 26 14:12:34.425: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:34.425: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:34.772316 7 log.go:172] (0x6afcfc0) (0x6afd030) Create stream I0826 14:12:34.772556 7 log.go:172] (0x6afcfc0) (0x6afd030) Stream added, broadcasting: 1 I0826 14:12:34.778392 7 log.go:172] (0x6afcfc0) Reply frame received for 1 I0826 14:12:34.778645 7 log.go:172] (0x6afcfc0) (0x7e79d50) Create stream I0826 14:12:34.778733 7 log.go:172] (0x6afcfc0) (0x7e79d50) Stream added, broadcasting: 3 I0826 14:12:34.780375 7 log.go:172] (0x6afcfc0) Reply frame received for 3 I0826 14:12:34.780529 7 log.go:172] (0x6afcfc0) (0x7053730) Create stream I0826 14:12:34.780618 7 log.go:172] (0x6afcfc0) (0x7053730) Stream added, broadcasting: 5 I0826 14:12:34.782095 7 log.go:172] (0x6afcfc0) Reply frame received for 5 I0826 14:12:34.849955 
7 log.go:172] (0x6afcfc0) Data frame received for 5 I0826 14:12:34.850101 7 log.go:172] (0x7053730) (5) Data frame handling I0826 14:12:34.850225 7 log.go:172] (0x6afcfc0) Data frame received for 3 I0826 14:12:34.850355 7 log.go:172] (0x7e79d50) (3) Data frame handling I0826 14:12:34.850470 7 log.go:172] (0x7e79d50) (3) Data frame sent I0826 14:12:34.850591 7 log.go:172] (0x6afcfc0) Data frame received for 3 I0826 14:12:34.850676 7 log.go:172] (0x7e79d50) (3) Data frame handling I0826 14:12:34.851588 7 log.go:172] (0x6afcfc0) Data frame received for 1 I0826 14:12:34.851673 7 log.go:172] (0x6afd030) (1) Data frame handling I0826 14:12:34.851782 7 log.go:172] (0x6afd030) (1) Data frame sent I0826 14:12:34.851890 7 log.go:172] (0x6afcfc0) (0x6afd030) Stream removed, broadcasting: 1 I0826 14:12:34.851990 7 log.go:172] (0x6afcfc0) Go away received I0826 14:12:34.852433 7 log.go:172] (0x6afcfc0) (0x6afd030) Stream removed, broadcasting: 1 I0826 14:12:34.852584 7 log.go:172] (0x6afcfc0) (0x7e79d50) Stream removed, broadcasting: 3 I0826 14:12:34.852678 7 log.go:172] (0x6afcfc0) (0x7053730) Stream removed, broadcasting: 5 Aug 26 14:12:34.852: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 26 14:12:34.853: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:34.853: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:34.990144 7 log.go:172] (0x86723f0) (0x86724d0) Create stream I0826 14:12:34.990269 7 log.go:172] (0x86723f0) (0x86724d0) Stream added, broadcasting: 1 I0826 14:12:34.993947 7 log.go:172] (0x86723f0) Reply frame received for 1 I0826 14:12:34.994094 7 log.go:172] (0x86723f0) (0x912a690) Create stream I0826 14:12:34.994169 7 log.go:172] (0x86723f0) (0x912a690) Stream added, broadcasting: 3 I0826 14:12:34.995429 7 log.go:172] (0x86723f0) Reply frame received for 3 I0826 14:12:34.995560 7 log.go:172] (0x86723f0) (0x8672690) Create stream I0826 14:12:34.995635 7 log.go:172] (0x86723f0) (0x8672690) Stream added, broadcasting: 5 I0826 14:12:34.996690 7 log.go:172] (0x86723f0) Reply frame received for 5 I0826 14:12:35.060718 7 log.go:172] (0x86723f0) Data frame received for 3 I0826 14:12:35.061009 7 log.go:172] (0x912a690) (3) Data frame handling I0826 14:12:35.061110 7 log.go:172] (0x912a690) (3) Data frame sent I0826 14:12:35.061235 7 log.go:172] (0x86723f0) Data frame received for 3 I0826 14:12:35.061336 7 log.go:172] (0x912a690) (3) Data frame handling I0826 14:12:35.061505 7 log.go:172] (0x86723f0) Data frame received for 5 I0826 14:12:35.061735 7 log.go:172] (0x8672690) (5) Data frame handling I0826 14:12:35.062603 7 log.go:172] (0x86723f0) Data frame received for 1 I0826 14:12:35.062730 7 log.go:172] (0x86724d0) (1) Data frame handling I0826 14:12:35.062851 7 log.go:172] (0x86724d0) (1) Data frame sent I0826 14:12:35.062987 7 log.go:172] (0x86723f0) (0x86724d0) Stream removed, broadcasting: 1 I0826 14:12:35.063131 7 log.go:172] (0x86723f0) Go away received I0826 14:12:35.063395 7 log.go:172] (0x86723f0) (0x86724d0) Stream removed, broadcasting: 1 I0826 14:12:35.063485 7 log.go:172] (0x86723f0) (0x912a690) Stream removed, broadcasting: 3 I0826 14:12:35.063567 7 log.go:172] (0x86723f0) (0x8672690) Stream removed, broadcasting: 5 Aug 26 14:12:35.063: INFO: Exec stderr: "" Aug 26 14:12:35.063: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:35.063: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:35.277924 7 log.go:172] (0x8359ab0) (0x8359b20) Create stream I0826 14:12:35.278045 7 log.go:172] (0x8359ab0) (0x8359b20) Stream added, broadcasting: 1 I0826 14:12:35.281908 7 log.go:172] (0x8359ab0) Reply frame received for 1 I0826 14:12:35.282083 7 log.go:172] (0x8359ab0) (0x912b180) Create stream I0826 14:12:35.282246 7 log.go:172] (0x8359ab0) (0x912b180) Stream added, broadcasting: 3 I0826 14:12:35.283815 7 log.go:172] (0x8359ab0) Reply frame received for 3 I0826 14:12:35.284016 7 log.go:172] (0x8359ab0) (0x8359ce0) Create stream I0826 14:12:35.284152 7 log.go:172] (0x8359ab0) (0x8359ce0) Stream added, broadcasting: 5 I0826 14:12:35.285654 7 log.go:172] (0x8359ab0) Reply frame received for 5 I0826 14:12:35.351616 7 log.go:172] (0x8359ab0) Data frame received for 5 I0826 14:12:35.351764 7 log.go:172] (0x8359ce0) (5) Data frame handling I0826 14:12:35.351889 7 log.go:172] (0x8359ab0) Data frame received for 3 I0826 14:12:35.351957 7 log.go:172] (0x912b180) (3) Data frame handling I0826 14:12:35.352065 7 log.go:172] (0x912b180) (3) Data frame sent I0826 14:12:35.352122 7 log.go:172] (0x8359ab0) Data frame received for 3 I0826 14:12:35.352176 7 log.go:172] (0x912b180) (3) Data frame handling I0826 14:12:35.352476 7 log.go:172] (0x8359ab0) Data frame received for 1 I0826 14:12:35.352541 7 log.go:172] (0x8359b20) (1) Data frame handling I0826 14:12:35.352601 7 log.go:172] (0x8359b20) (1) Data frame sent I0826 14:12:35.352678 7 log.go:172] (0x8359ab0) (0x8359b20) Stream removed, broadcasting: 1 I0826 14:12:35.352823 7 log.go:172] (0x8359ab0) Go away received I0826 14:12:35.353176 7 log.go:172] (0x8359ab0) (0x8359b20) Stream removed, broadcasting: 1 I0826 14:12:35.353260 7 log.go:172] (0x8359ab0) (0x912b180) Stream removed, broadcasting: 3 I0826 14:12:35.353332 7 log.go:172] (0x8359ab0) (0x8359ce0) Stream removed, broadcasting: 5 Aug 26 14:12:35.353: INFO: Exec stderr: "" Aug 26 14:12:35.353: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:35.353: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:35.449060 7 log.go:172] (0x7053ea0) (0x7053f10) Create stream I0826 14:12:35.449284 7 log.go:172] (0x7053ea0) (0x7053f10) Stream added, broadcasting: 1 I0826 14:12:35.460084 7 log.go:172] (0x7053ea0) Reply frame received for 1 I0826 14:12:35.460217 7 log.go:172] (0x7053ea0) (0x912b7a0) Create stream I0826 14:12:35.460273 7 log.go:172] (0x7053ea0) (0x912b7a0) Stream added, broadcasting: 3 I0826 14:12:35.461715 7 log.go:172] (0x7053ea0) Reply frame received for 3 I0826 14:12:35.461870 7 log.go:172] (0x7053ea0) (0x86728c0) Create stream I0826 14:12:35.461953 7 log.go:172] (0x7053ea0) (0x86728c0) Stream added, broadcasting: 5 I0826 14:12:35.463201 7 log.go:172] (0x7053ea0) Reply frame received for 5 I0826 14:12:35.516232 7 log.go:172] (0x7053ea0) Data frame received for 5 I0826 14:12:35.516342 7 log.go:172] (0x86728c0) (5) Data frame handling I0826 14:12:35.516506 7 log.go:172] (0x7053ea0) Data frame received for 3 I0826 14:12:35.516568 7 log.go:172] (0x912b7a0) (3) Data frame handling I0826 14:12:35.516649 7 log.go:172] (0x912b7a0) (3) Data frame sent I0826 14:12:35.516824 7 
log.go:172] (0x7053ea0) Data frame received for 3 I0826 14:12:35.516912 7 log.go:172] (0x912b7a0) (3) Data frame handling I0826 14:12:35.518089 7 log.go:172] (0x7053ea0) Data frame received for 1 I0826 14:12:35.518165 7 log.go:172] (0x7053f10) (1) Data frame handling I0826 14:12:35.518277 7 log.go:172] (0x7053f10) (1) Data frame sent I0826 14:12:35.518370 7 log.go:172] (0x7053ea0) (0x7053f10) Stream removed, broadcasting: 1 I0826 14:12:35.518660 7 log.go:172] (0x7053ea0) Go away received I0826 14:12:35.518929 7 log.go:172] (0x7053ea0) (0x7053f10) Stream removed, broadcasting: 1 I0826 14:12:35.519108 7 log.go:172] (0x7053ea0) (0x912b7a0) Stream removed, broadcasting: 3 I0826 14:12:35.519261 7 log.go:172] (0x7053ea0) (0x86728c0) Stream removed, broadcasting: 5 Aug 26 14:12:35.519: INFO: Exec stderr: "" Aug 26 14:12:35.519: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8603 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 14:12:35.519: INFO: >>> kubeConfig: /root/.kube/config I0826 14:12:35.618093 7 log.go:172] (0x8672d20) (0x8672d90) Create stream I0826 14:12:35.618241 7 log.go:172] (0x8672d20) (0x8672d90) Stream added, broadcasting: 1 I0826 14:12:35.622775 7 log.go:172] (0x8672d20) Reply frame received for 1 I0826 14:12:35.622959 7 log.go:172] (0x8672d20) (0x912bb90) Create stream I0826 14:12:35.623047 7 log.go:172] (0x8672d20) (0x912bb90) Stream added, broadcasting: 3 I0826 14:12:35.624495 7 log.go:172] (0x8672d20) Reply frame received for 3 I0826 14:12:35.624689 7 log.go:172] (0x8672d20) (0x912bd50) Create stream I0826 14:12:35.624853 7 log.go:172] (0x8672d20) (0x912bd50) Stream added, broadcasting: 5 I0826 14:12:35.626238 7 log.go:172] (0x8672d20) Reply frame received for 5 I0826 14:12:35.672865 7 log.go:172] (0x8672d20) Data frame received for 5 I0826 14:12:35.673008 7 log.go:172] (0x912bd50) (5) Data frame handling I0826 14:12:35.673199 7 log.go:172] (0x8672d20) Data frame received for 3 I0826 14:12:35.673415 7 log.go:172] (0x912bb90) (3) Data frame handling I0826 14:12:35.673648 7 log.go:172] (0x912bb90) (3) Data frame sent I0826 14:12:35.673834 7 log.go:172] (0x8672d20) Data frame received for 3 I0826 14:12:35.674010 7 log.go:172] (0x912bb90) (3) Data frame handling I0826 14:12:35.674199 7 log.go:172] (0x8672d20) Data frame received for 1 I0826 14:12:35.674373 7 log.go:172] (0x8672d90) (1) Data frame handling I0826 14:12:35.674512 7 log.go:172] (0x8672d90) (1) Data frame sent I0826 14:12:35.674636 7 log.go:172] (0x8672d20) (0x8672d90) Stream removed, broadcasting: 1 I0826 14:12:35.674787 7 log.go:172] (0x8672d20) Go away received I0826 14:12:35.675205 7 log.go:172] (0x8672d20) (0x8672d90) Stream removed, broadcasting: 1 I0826 14:12:35.675335 7 log.go:172] (0x8672d20) (0x912bb90) Stream removed, broadcasting: 3 I0826 14:12:35.675423 7 log.go:172] (0x8672d20) (0x912bd50) Stream removed, broadcasting: 5 Aug 26 14:12:35.675: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:12:35.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8603" for this suite. 
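The spec above passes because of a single kubelet rule: the kubelet writes a managed /etc/hosts into every container of a pod, except when the pod runs with hostNetwork=true or the container mounts its own volume over /etc/hosts. The repeated "cat /etc/hosts" and "cat /etc/hosts-original" execs against test-pod and test-host-network-pod check exactly those cases. Below is a minimal sketch of a pod that exercises both behaviors, built with the Kubernetes Go client types; the pod name, image, and volume name are illustrative, not the ones the suite creates.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
		Spec: corev1.PodSpec{
			// Setting Spec.HostNetwork = true would opt the whole pod
			// out of kubelet /etc/hosts management as well.
			Volumes: []corev1.Volume{{
				Name: "node-hosts",
				VolumeSource: corev1.VolumeSource{
					// The node's own hosts file, mounted as a single file.
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{
					// No mount over /etc/hosts: the kubelet manages it.
					Name:    "managed",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				},
				{
					// Mounting a volume over /etc/hosts opts this
					// container out of kubelet management.
					Name:    "unmanaged",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "node-hosts",
						MountPath: "/etc/hosts",
					}},
				},
			},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Exec-ing "cat /etc/hosts" in the first container should show the kubelet-generated file, while the second container sees the mounted node file, which is the distinction the suite asserts above.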
• [SLOW TEST:30.350 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:12:35.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 14:12:35.983: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 26 14:12:41.483: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 26 14:12:43.986: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 26 14:12:46.068: INFO: Creating deployment "test-rollover-deployment" Aug 26 14:12:46.332: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 26 14:12:48.535: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 26 14:12:48.543: INFO: Ensure that both replica sets have 1 created replica Aug 26 14:12:48.551: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 26 14:12:48.562: INFO: Updating deployment test-rollover-deployment Aug 26 14:12:48.563: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 26 14:12:50.756: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 26 14:12:50.766: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 26 14:12:50.777: INFO: all replica sets need to contain the pod-template-hash label Aug 26 14:12:50.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047969, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:12:52.869: INFO: all replica sets need to contain the pod-template-hash label Aug 26 14:12:52.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047969, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:12:54.790: INFO: all replica sets need to contain the pod-template-hash label Aug 26 14:12:54.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047973, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:12:57.141: INFO: all replica sets need to contain the pod-template-hash label Aug 26 14:12:57.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047973, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Aug 26 14:12:58.790: INFO: all replica sets need to contain the pod-template-hash label Aug 26 14:12:58.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047973, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:00.789: INFO: all replica sets need to contain the pod-template-hash label Aug 26 14:13:00.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047973, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:02.790: INFO: all replica sets need to contain the pod-template-hash label Aug 26 14:13:02.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047973, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734047966, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:04.921: INFO: Aug 26 14:13:04.921: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 26 14:13:04.939: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:{test-rollover-deployment deployment-6722 /apis/apps/v1/namespaces/deployment-6722/deployments/test-rollover-deployment cdf08237-3125-4c11-a2e1-bf6d449bdfa6 3893607 2 2020-08-26 14:12:46 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8873058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 14:12:46 +0000 UTC,LastTransitionTime:2020-08-26 14:12:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-26 14:13:03 +0000 UTC,LastTransitionTime:2020-08-26 14:12:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 26 14:13:04.947: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6722 /apis/apps/v1/namespaces/deployment-6722/replicasets/test-rollover-deployment-574d6dfbff 6a46df87-adfb-4263-a23d-16fe490d12cd 3893595 2 2020-08-26 14:12:48 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment cdf08237-3125-4c11-a2e1-bf6d449bdfa6 0x85ee177 0x85ee178}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x85ee1e8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 26 14:13:04.947: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 26 14:13:04.948: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6722 /apis/apps/v1/namespaces/deployment-6722/replicasets/test-rollover-controller 49345a97-8305-4df4-80ed-4ea5171863fd 3893605 2 2020-08-26 14:12:35 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment cdf08237-3125-4c11-a2e1-bf6d449bdfa6 0x85ee0a7 0x85ee0a8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x85ee108 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 26 14:13:04.949: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6722 /apis/apps/v1/namespaces/deployment-6722/replicasets/test-rollover-deployment-f6c94f66c 183253c4-8f84-48cb-ba77-174ac9c56f32 3893531 2 2020-08-26 14:12:46 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment cdf08237-3125-4c11-a2e1-bf6d449bdfa6 0x85ee250 0x85ee251}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x85ee2c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 26 14:13:04.976: INFO: Pod 
"test-rollover-deployment-574d6dfbff-56cjx" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-56cjx test-rollover-deployment-574d6dfbff- deployment-6722 /api/v1/namespaces/deployment-6722/pods/test-rollover-deployment-574d6dfbff-56cjx 8a3f3173-46ed-41e4-8651-4c2faf25b90d 3893550 0 2020-08-26 14:12:49 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 6a46df87-adfb-4263-a23d-16fe490d12cd 0x88733c7 0x88733c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zqd6r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zqd6r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zqd6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:12:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:12:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:12:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:12:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.238,StartTime:2020-08-26 14:12:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 14:12:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b1814517d6acb8ed1ca09945c032c3364c26f0f3a2c6aa5979d146dbb74c7c04,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:13:04.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6722" for this suite. • [SLOW TEST:29.295 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":25,"skipped":345,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:13:04.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-jzjg STEP: Creating a pod to test atomic-volume-subpath Aug 26 14:13:05.149: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jzjg" in namespace "subpath-5749" to be "success or failure" Aug 26 14:13:05.292: INFO: Pod "pod-subpath-test-downwardapi-jzjg": 
Phase="Pending", Reason="", readiness=false. Elapsed: 142.822173ms Aug 26 14:13:07.578: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428855801s Aug 26 14:13:09.723: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574290469s Aug 26 14:13:11.871: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 6.721885269s Aug 26 14:13:14.037: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 8.888132112s Aug 26 14:13:16.084: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 10.935403054s Aug 26 14:13:18.630: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 13.480853315s Aug 26 14:13:20.638: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 15.488494669s Aug 26 14:13:22.646: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 17.496924457s Aug 26 14:13:24.765: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 19.61553678s Aug 26 14:13:26.985: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 21.836113358s Aug 26 14:13:28.993: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Running", Reason="", readiness=true. Elapsed: 23.843721689s Aug 26 14:13:31.000: INFO: Pod "pod-subpath-test-downwardapi-jzjg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.850640881s STEP: Saw pod success Aug 26 14:13:31.000: INFO: Pod "pod-subpath-test-downwardapi-jzjg" satisfied condition "success or failure" Aug 26 14:13:31.005: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-jzjg container test-container-subpath-downwardapi-jzjg: STEP: delete the pod Aug 26 14:13:31.273: INFO: Waiting for pod pod-subpath-test-downwardapi-jzjg to disappear Aug 26 14:13:31.444: INFO: Pod pod-subpath-test-downwardapi-jzjg no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-jzjg Aug 26 14:13:31.445: INFO: Deleting pod "pod-subpath-test-downwardapi-jzjg" in namespace "subpath-5749" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:13:31.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5749" for this suite. 
• [SLOW TEST:26.495 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":26,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:13:31.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 26 14:13:40.565: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 26 14:13:43.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048021, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:45.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, 
loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048021, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:47.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048021, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:49.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048021, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:51.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048021, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:13:53.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048021, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048020, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 14:13:56.747: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:13:59.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1184" for this suite. STEP: Destroying namespace "webhook-1184-markers" for this suite. 
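The rejection asserted in this spec is driven entirely by the failure policy on the registration: the webhook points at a service the API server cannot reach, and with FailurePolicy: Fail the failed admission call must fail the request itself, so the configmap create is unconditionally rejected. A sketch of an equivalent fail-closed registration using the admissionregistration/v1 Go types follows; the configuration name, service reference, and rule scope are illustrative.

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	cfg := &admissionv1.ValidatingWebhookConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "admissionregistration.k8s.io/v1",
			Kind:       "ValidatingWebhookConfiguration",
		},
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-demo"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "default",
					Name:      "no-such-service", // unreachable on purpose
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			// Fail closed: if the webhook cannot be called,
			// reject the request rather than let it through.
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

With FailurePolicy: Ignore instead, the same unreachable service would let the create through, which is why the spec pins the policy to Fail.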
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:31.892 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":27,"skipped":396,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:14:03.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 26 14:14:04.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6" in namespace "downward-api-2624" to be "success or failure" Aug 26 14:14:05.119: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6": Phase="Pending", Reason="", readiness=false. Elapsed: 145.380255ms Aug 26 14:14:07.372: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398729829s Aug 26 14:14:09.511: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537723019s Aug 26 14:14:12.083: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.10961586s Aug 26 14:14:14.191: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.217189276s Aug 26 14:14:16.531: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6": Phase="Running", Reason="", readiness=true. Elapsed: 11.557314631s Aug 26 14:14:18.644: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.67018011s STEP: Saw pod success Aug 26 14:14:18.644: INFO: Pod "downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6" satisfied condition "success or failure" Aug 26 14:14:18.683: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6 container client-container: STEP: delete the pod Aug 26 14:14:18.719: INFO: Waiting for pod downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6 to disappear Aug 26 14:14:19.231: INFO: Pod downwardapi-volume-a886e615-da28-4136-b89c-6932e8144db6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:14:19.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2624" for this suite. • [SLOW TEST:15.871 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:14:19.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 26 14:14:20.859: INFO: Waiting up to 5m0s for pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9" in namespace "emptydir-6671" to be "success or failure" Aug 26 14:14:21.157: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 298.397578ms Aug 26 14:14:23.398: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.538861205s Aug 26 14:14:25.560: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701121947s Aug 26 14:14:28.044: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.185600139s Aug 26 14:14:30.290: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.430773726s Aug 26 14:14:33.184: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.324852385s Aug 26 14:14:35.631: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Running", Reason="", readiness=true. Elapsed: 14.772174162s Aug 26 14:14:38.178: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.318717232s STEP: Saw pod success Aug 26 14:14:38.178: INFO: Pod "pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9" satisfied condition "success or failure" Aug 26 14:14:38.214: INFO: Trying to get logs from node jerma-worker pod pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9 container test-container: STEP: delete the pod Aug 26 14:14:39.481: INFO: Waiting for pod pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9 to disappear Aug 26 14:14:39.486: INFO: Pod pod-3e0f530b-cea9-4c88-90c9-07531ded3fc9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:14:39.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6671" for this suite. • [SLOW TEST:20.436 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":425,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:14:39.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 26 14:14:41.390: INFO: Waiting up to 5m0s for pod "downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354" in namespace "downward-api-8104" to be "success or failure" Aug 26 14:14:41.467: INFO: Pod "downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354": Phase="Pending", Reason="", readiness=false. Elapsed: 76.820379ms Aug 26 14:14:43.514: INFO: Pod "downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124012388s Aug 26 14:14:45.572: INFO: Pod "downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.181393783s
Aug 26 14:14:47.578: INFO: Pod "downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354": Phase="Running", Reason="", readiness=true. Elapsed: 6.187905594s
Aug 26 14:14:49.585: INFO: Pod "downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.194118443s
STEP: Saw pod success
Aug 26 14:14:49.585: INFO: Pod "downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354" satisfied condition "success or failure"
Aug 26 14:14:49.589: INFO: Trying to get logs from node jerma-worker pod downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354 container dapi-container:
STEP: delete the pod
Aug 26 14:14:49.758: INFO: Waiting for pod downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354 to disappear
Aug 26 14:14:50.062: INFO: Pod downward-api-5b36dd63-5396-4aa3-8c3a-498657f02354 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:14:50.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8104" for this suite.
• [SLOW TEST:10.383 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:14:50.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:14:50.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Aug 26 14:14:51.805: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T14:14:51Z generation:1 name:name1 resourceVersion:3894113 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4f048710-6f63-4fc5-9c4b-383e7b2fb6b3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 26 14:15:01.922: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T14:15:01Z generation:1 name:name2 resourceVersion:3894146 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7490ec3f-e271-4f15-8f0e-11e87d3452fd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 26 14:15:11.931: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T14:14:51Z generation:2 name:name1 resourceVersion:3894178 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4f048710-6f63-4fc5-9c4b-383e7b2fb6b3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 26 14:15:22.106: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T14:15:01Z generation:2 name:name2 resourceVersion:3894222 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7490ec3f-e271-4f15-8f0e-11e87d3452fd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 26 14:15:32.116: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T14:14:51Z generation:2 name:name1 resourceVersion:3894275 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4f048710-6f63-4fc5-9c4b-383e7b2fb6b3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 26 14:15:42.573: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T14:15:01Z generation:2 name:name2 resourceVersion:3894323 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7490ec3f-e271-4f15-8f0e-11e87d3452fd] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:15:53.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8885" for this suite.
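For context on the events above: ADDED, MODIFIED, and DELETED are watch notifications for instances of the custom resource this test registers. A CustomResourceDefinition consistent with the group, version, kind, and plural visible in those selfLinks would look roughly like the sketch below; the plural and cluster scope are inferred from the selfLink /apis/mygroup.example.com/v1beta1/noxus/name1, while the singular name is an assumption.

```yaml
# Sketch of a CRD consistent with the watch events above; not the test's actual fixture.
apiVersion: apiextensions.k8s.io/v1beta1   # the v1beta1 CRD API still served on v1.17 clusters
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com          # must be <plural>.<group>
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster                           # the selfLink has no namespace segment
  names:
    plural: noxus
    singular: noxu                         # assumption; only the plural appears in the log
    kind: WishIHadChosenNoxu
```

Each MODIFIED event corresponds to the dummy:test field being patched into the object, which is why the metadata shows generation bumping from 1 to 2.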
• [SLOW TEST:63.022 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":31,"skipped":479,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:15:53.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:15:53.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5188" for this suite.
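The Lease spec above finishes in well under a second because it only exercises create, read, update, and delete against the coordination API and never schedules a pod. A minimal Lease of the kind being exercised might look like the following sketch; the object name and spec values are illustrative, not taken from the log.

```yaml
# Illustrative Lease object; the test's own object names are not logged.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease              # illustrative
  namespace: lease-test-5188       # the namespace created for this test
spec:
  holderIdentity: example-holder   # illustrative; identifies the current lock holder
  leaseDurationSeconds: 30         # how long the holder's claim is considered valid
```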
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":32,"skipped":493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:15:53.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 26 14:15:53.914: INFO: Waiting up to 5m0s for pod "pod-b7ee6529-cf76-4a77-91c4-8227db790bc7" in namespace "emptydir-8960" to be "success or failure" Aug 26 14:15:53.983: INFO: Pod "pod-b7ee6529-cf76-4a77-91c4-8227db790bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 68.525216ms Aug 26 14:15:56.037: INFO: Pod "pod-b7ee6529-cf76-4a77-91c4-8227db790bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12278437s Aug 26 14:15:58.202: INFO: Pod "pod-b7ee6529-cf76-4a77-91c4-8227db790bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287132181s Aug 26 14:16:00.219: INFO: Pod "pod-b7ee6529-cf76-4a77-91c4-8227db790bc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.304364666s STEP: Saw pod success Aug 26 14:16:00.219: INFO: Pod "pod-b7ee6529-cf76-4a77-91c4-8227db790bc7" satisfied condition "success or failure" Aug 26 14:16:00.224: INFO: Trying to get logs from node jerma-worker pod pod-b7ee6529-cf76-4a77-91c4-8227db790bc7 container test-container: STEP: delete the pod Aug 26 14:16:00.260: INFO: Waiting for pod pod-b7ee6529-cf76-4a77-91c4-8227db790bc7 to disappear Aug 26 14:16:00.288: INFO: Pod pod-b7ee6529-cf76-4a77-91c4-8227db790bc7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:16:00.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8960" for this suite. 
• [SLOW TEST:6.750 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":556,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:16:00.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 26 14:16:01.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c" in namespace "projected-489" to be "success or failure" Aug 26 14:16:01.267: INFO: Pod "downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c": Phase="Pending", Reason="", readiness=false. Elapsed: 149.943227ms Aug 26 14:16:03.538: INFO: Pod "downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.420911546s Aug 26 14:16:05.615: INFO: Pod "downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497927988s Aug 26 14:16:07.842: INFO: Pod "downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.725851265s Aug 26 14:16:09.850: INFO: Pod "downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.73344873s STEP: Saw pod success Aug 26 14:16:09.850: INFO: Pod "downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c" satisfied condition "success or failure" Aug 26 14:16:09.856: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c container client-container: STEP: delete the pod Aug 26 14:16:10.033: INFO: Waiting for pod downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c to disappear Aug 26 14:16:10.128: INFO: Pod downwardapi-volume-707803ec-5ef0-4266-9e72-3d98bca9e99c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:16:10.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-489" for this suite. • [SLOW TEST:9.835 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":567,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:16:10.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-b66cae73-1137-4735-b33b-fc00c2ba0a6d STEP: Creating a pod to test consume configMaps Aug 26 14:16:10.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42" in namespace "configmap-2857" to be "success or failure" Aug 26 14:16:10.356: INFO: Pod "pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42": Phase="Pending", Reason="", readiness=false. Elapsed: 22.553112ms Aug 26 14:16:13.249: INFO: Pod "pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.916011895s Aug 26 14:16:15.427: INFO: Pod "pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42": Phase="Pending", Reason="", readiness=false. Elapsed: 5.093899351s Aug 26 14:16:18.443: INFO: Pod "pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.10989106s STEP: Saw pod success Aug 26 14:16:18.443: INFO: Pod "pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42" satisfied condition "success or failure" Aug 26 14:16:18.490: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42 container configmap-volume-test: STEP: delete the pod Aug 26 14:16:19.146: INFO: Waiting for pod pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42 to disappear Aug 26 14:16:19.323: INFO: Pod pod-configmaps-e81385dc-dd9b-465f-8a3a-11f78c581a42 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:16:19.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2857" for this suite. • [SLOW TEST:9.369 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":581,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:16:19.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 14:16:19.637: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 26 14:16:19.647: INFO: Number of nodes with available pods: 0 Aug 26 14:16:19.647: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 26 14:16:20.260: INFO: Number of nodes with available pods: 0 Aug 26 14:16:20.260: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:21.351: INFO: Number of nodes with available pods: 0 Aug 26 14:16:21.351: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:22.268: INFO: Number of nodes with available pods: 0 Aug 26 14:16:22.268: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:23.295: INFO: Number of nodes with available pods: 0 Aug 26 14:16:23.295: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:24.484: INFO: Number of nodes with available pods: 0 Aug 26 14:16:24.484: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:25.291: INFO: Number of nodes with available pods: 0 Aug 26 14:16:25.291: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:26.272: INFO: Number of nodes with available pods: 0 Aug 26 14:16:26.272: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:27.291: INFO: Number of nodes with available pods: 1 Aug 26 14:16:27.291: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 26 14:16:27.687: INFO: Number of nodes with available pods: 1 Aug 26 14:16:27.688: INFO: Number of running nodes: 0, number of available pods: 1 Aug 26 14:16:28.925: INFO: Number of nodes with available pods: 0 Aug 26 14:16:28.925: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 26 14:16:28.974: INFO: Number of nodes with available pods: 0 Aug 26 14:16:28.974: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:30.352: INFO: Number of nodes with available pods: 0 Aug 26 14:16:30.352: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:31.385: INFO: Number of nodes with available pods: 0 Aug 26 14:16:31.385: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:32.058: INFO: Number of nodes with available pods: 0 Aug 26 14:16:32.058: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:32.981: INFO: Number of nodes with available pods: 0 Aug 26 14:16:32.981: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:33.980: INFO: Number of nodes with available pods: 0 Aug 26 14:16:33.980: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:34.979: INFO: Number of nodes with available pods: 0 Aug 26 14:16:34.979: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:35.980: INFO: Number of nodes with available pods: 0 Aug 26 14:16:35.980: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:36.979: INFO: Number of nodes with available pods: 0 Aug 26 14:16:36.979: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:37.978: INFO: Number of nodes with available pods: 0 Aug 26 14:16:37.978: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:38.979: INFO: Number of nodes with available pods: 0 Aug 26 14:16:38.980: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:40.076: INFO: Number of nodes with available pods: 0 Aug 26 14:16:40.077: INFO: Node jerma-worker2 is running more than one daemon pod Aug 26 14:16:40.991: INFO: Number of nodes with available pods: 0 Aug 26 14:16:40.991: INFO: Node jerma-worker2 is running 
more than one daemon pod
Aug 26 14:16:42.460: INFO: Number of nodes with available pods: 0
Aug 26 14:16:42.460: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:16:42.982: INFO: Number of nodes with available pods: 0
Aug 26 14:16:42.983: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:16:43.986: INFO: Number of nodes with available pods: 0
Aug 26 14:16:43.986: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:16:45.081: INFO: Number of nodes with available pods: 0
Aug 26 14:16:45.081: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:16:45.981: INFO: Number of nodes with available pods: 0
Aug 26 14:16:45.981: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:16:46.996: INFO: Number of nodes with available pods: 0
Aug 26 14:16:46.996: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 14:16:47.999: INFO: Number of nodes with available pods: 1
Aug 26 14:16:47.999: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5780, will wait for the garbage collector to delete the pods
Aug 26 14:16:48.090: INFO: Deleting DaemonSet.extensions daemon-set took: 8.947408ms
Aug 26 14:16:48.391: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.907892ms
Aug 26 14:16:56.097: INFO: Number of nodes with available pods: 0
Aug 26 14:16:56.097: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 14:16:56.101: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5780/daemonsets","resourceVersion":"3894780"},"items":null}
Aug 26 14:16:56.105: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5780/pods","resourceVersion":"3894780"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:16:56.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5780" for this suite.
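To summarize the sequence above: the DaemonSet carries a node selector, so no pods run until a node is labelled to match; relabelling the node from blue to green unschedules the pod; and once the selector is updated to green (together with a RollingUpdate strategy) a pod becomes available again. The end state corresponds roughly to the sketch below, where the label key and value are illustrative (the log only shows that a colour label is flipped) and the image is borrowed from elsewhere in this run.

```yaml
# Illustrative DaemonSet with a node selector; not the suite's actual fixture.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-5780
spec:
  selector:
    matchLabels:
      app: daemon-set                    # illustrative label
  updateStrategy:
    type: RollingUpdate                  # set by the final step above
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green                     # illustrative key; only matching nodes run a pod
      containers:
      - name: app                        # container name seen in this run's pod listings
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative
```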
• [SLOW TEST:36.689 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":36,"skipped":603,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:16:56.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 14:16:56.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6924'
Aug 26 14:17:01.469: INFO: stderr: ""
Aug 26 14:17:01.469: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Aug 26 14:17:01.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6924'
Aug 26 14:17:11.603: INFO: stderr: ""
Aug 26 14:17:11.603: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:17:11.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6924" for this suite.
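Because the command above passes --restart=Never together with --generator=run-pod/v1, kubectl creates a bare pod rather than a Deployment or Job. A roughly equivalent manifest is sketched below; the run label and container name follow kubectl's usual run-pod/v1 behaviour, which is an assumption rather than something shown in the log.

```yaml
# Approximate manifest for the pod created by the kubectl run invocation above.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-6924
  labels:
    run: e2e-test-httpd-pod        # assumed generator-added label
spec:
  restartPolicy: Never             # from --restart=Never
  containers:
  - name: e2e-test-httpd-pod       # assumed: the generator names the container after the pod
    image: docker.io/library/httpd:2.4.38-alpine
```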
• [SLOW TEST:15.412 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":37,"skipped":614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:17:11.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 26 14:17:11.720: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 26 14:17:11.744: INFO: Waiting for terminating namespaces to be deleted... 
Aug 26 14:17:11.750: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 14:17:11.764: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.765: INFO: Container app ready: true, restart count 0
Aug 26 14:17:11.765: INFO: rally-ff774cb2-8mqomq37-0 from c-rally-ff774cb2-r8y59gak started at 2020-08-26 14:17:09 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.765: INFO: Container rally-ff774cb2-8mqomq37 ready: false, restart count 0
Aug 26 14:17:11.765: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.765: INFO: Container kube-proxy ready: true, restart count 0
Aug 26 14:17:11.765: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.765: INFO: Container kindnet-cni ready: true, restart count 0
Aug 26 14:17:11.765: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 14:17:11.794: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.795: INFO: Container kindnet-cni ready: true, restart count 0
Aug 26 14:17:11.795: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.795: INFO: Container app ready: true, restart count 0
Aug 26 14:17:11.795: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.795: INFO: Container kube-proxy ready: true, restart count 0
Aug 26 14:17:11.795: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 26 14:17:11.795: INFO: Container httpd ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-27d963f6-6a2a-4115-92e7-73586a78193c 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-27d963f6-6a2a-4115-92e7-73586a78193c off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-27d963f6-6a2a-4115-92e7-73586a78193c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:22:25.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1902" for this suite.
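The conflict validated above exists because a hostPort with an empty hostIP binds 0.0.0.0, i.e. every host address, so a second pod requesting the same port and protocol on 127.0.0.1 of the same node can never be placed. The two pods correspond roughly to the sketch below; the image, container name, and container port are illustrative, while the node-pinning label is the random one applied in the log.

```yaml
# pod4: hostIP omitted (empty string == 0.0.0.0), expected to schedule.
apiVersion: v1
kind: Pod
metadata:
  name: pod4
  namespace: sched-pred-1902
spec:
  nodeSelector:
    kubernetes.io/e2e-27d963f6-6a2a-4115-92e7-73586a78193c: "95"   # label applied above
  containers:
  - name: web                                      # illustrative
    image: docker.io/library/httpd:2.4.38-alpine   # illustrative
    ports:
    - containerPort: 80
      hostPort: 54322
      protocol: TCP
---
# pod5: same hostPort and protocol but hostIP 127.0.0.1 -> conflicts with pod4, stays Pending.
apiVersion: v1
kind: Pod
metadata:
  name: pod5
  namespace: sched-pred-1902
spec:
  nodeSelector:
    kubernetes.io/e2e-27d963f6-6a2a-4115-92e7-73586a78193c: "95"
  containers:
  - name: web
    image: docker.io/library/httpd:2.4.38-alpine
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```

Most of this spec's five-minute wall time is the test waiting to confirm that pod5 stays unscheduled.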
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:314.389 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":38,"skipped":649,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:22:26.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:22:26.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1701
I0826 14:22:26.969403 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1701, replica count: 1
I0826 14:22:28.022233 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 14:22:29.023633 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 14:22:30.024564 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 14:22:31.025324 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 14:22:32.026052 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 14:22:33.026693 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 14:22:34.027308 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 14:22:35.028657 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
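The Created/Got endpoints pairs that follow each time how long a newly created Service takes to surface endpoints for the replication controller's single pod; the bracketed duration on each Got endpoints line is that latency. One such Service would look roughly like this sketch (the selector and port are assumptions; the log shows only generated names such as latency-svc-nddp5):

```yaml
# Illustrative Service of the kind created repeatedly by this test.
apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example        # the test generates names like latency-svc-nddp5
  namespace: svc-latency-1701
spec:
  selector:
    name: svc-latency-rc           # assumed pod label from the replication controller above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```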
Aug 26 14:22:35.414: INFO: Created: latency-svc-nddp5 Aug 26 14:22:35.608: INFO: Got endpoints: latency-svc-nddp5 [475.891732ms] Aug 26 14:22:35.681: INFO: Created: latency-svc-xsbkq Aug 26 14:22:36.381: INFO: Got endpoints: latency-svc-xsbkq [772.314897ms] Aug 26 14:22:36.806: INFO: Created: latency-svc-xkvrl Aug 26 14:22:36.859: INFO: Got endpoints: latency-svc-xkvrl [1.250252754s] Aug 26 14:22:37.036: INFO: Created: latency-svc-mj8pb Aug 26 14:22:37.370: INFO: Got endpoints: latency-svc-mj8pb [1.760884853s] Aug 26 14:22:37.930: INFO: Created: latency-svc-jhlgg Aug 26 14:22:38.286: INFO: Got endpoints: latency-svc-jhlgg [2.677064279s] Aug 26 14:22:38.844: INFO: Created: latency-svc-cbcxx Aug 26 14:22:38.882: INFO: Got endpoints: latency-svc-cbcxx [3.272840526s] Aug 26 14:22:40.155: INFO: Created: latency-svc-4slzl Aug 26 14:22:40.159: INFO: Got endpoints: latency-svc-4slzl [4.550058003s] Aug 26 14:22:40.311: INFO: Created: latency-svc-lnp5x Aug 26 14:22:40.373: INFO: Got endpoints: latency-svc-lnp5x [4.763302744s] Aug 26 14:22:40.530: INFO: Created: latency-svc-p8sxb Aug 26 14:22:40.561: INFO: Got endpoints: latency-svc-p8sxb [4.952264664s] Aug 26 14:22:40.709: INFO: Created: latency-svc-qcpd8 Aug 26 14:22:40.741: INFO: Got endpoints: latency-svc-qcpd8 [5.130929397s] Aug 26 14:22:40.885: INFO: Created: latency-svc-j7cnf Aug 26 14:22:40.949: INFO: Got endpoints: latency-svc-j7cnf [5.33951578s] Aug 26 14:22:41.305: INFO: Created: latency-svc-wnp4l Aug 26 14:22:41.449: INFO: Got endpoints: latency-svc-wnp4l [5.839945025s] Aug 26 14:22:41.842: INFO: Created: latency-svc-jgwd7 Aug 26 14:22:41.921: INFO: Got endpoints: latency-svc-jgwd7 [6.310645057s] Aug 26 14:22:42.529: INFO: Created: latency-svc-5qdpq Aug 26 14:22:42.908: INFO: Got endpoints: latency-svc-5qdpq [7.298275152s] Aug 26 14:22:43.542: INFO: Created: latency-svc-l4t84 Aug 26 14:22:44.369: INFO: Got endpoints: latency-svc-l4t84 [8.758683154s] Aug 26 14:22:44.653: INFO: Created: latency-svc-txmz6 Aug 26 14:22:44.963: INFO: Got endpoints: latency-svc-txmz6 [9.353808431s] Aug 26 14:22:45.421: INFO: Created: latency-svc-gcm6z Aug 26 14:22:45.747: INFO: Got endpoints: latency-svc-gcm6z [9.364974171s] Aug 26 14:22:45.751: INFO: Created: latency-svc-tjrd7 Aug 26 14:22:45.795: INFO: Got endpoints: latency-svc-tjrd7 [8.935154204s] Aug 26 14:22:46.131: INFO: Created: latency-svc-754bt Aug 26 14:22:46.358: INFO: Got endpoints: latency-svc-754bt [8.987641351s] Aug 26 14:22:46.717: INFO: Created: latency-svc-72mr6 Aug 26 14:22:46.723: INFO: Got endpoints: latency-svc-72mr6 [8.436309875s] Aug 26 14:22:47.243: INFO: Created: latency-svc-lzb4k Aug 26 14:22:47.252: INFO: Got endpoints: latency-svc-lzb4k [8.370004444s] Aug 26 14:22:47.576: INFO: Created: latency-svc-l25qx Aug 26 14:22:47.618: INFO: Got endpoints: latency-svc-l25qx [7.458325537s] Aug 26 14:22:49.404: INFO: Created: latency-svc-rp6ws Aug 26 14:22:49.639: INFO: Got endpoints: latency-svc-rp6ws [9.265318685s] Aug 26 14:22:49.650: INFO: Created: latency-svc-bfsl8 Aug 26 14:22:49.794: INFO: Got endpoints: latency-svc-bfsl8 [9.231829151s] Aug 26 14:22:49.838: INFO: Created: latency-svc-7g4q7 Aug 26 14:22:49.962: INFO: Got endpoints: latency-svc-7g4q7 [9.221175812s] Aug 26 14:22:50.147: INFO: Created: latency-svc-mb48d Aug 26 14:22:50.195: INFO: Got endpoints: latency-svc-mb48d [9.246054223s] Aug 26 14:22:50.244: INFO: Created: latency-svc-j2rm7 Aug 26 14:22:50.345: INFO: Got endpoints: latency-svc-j2rm7 [8.895648775s] Aug 26 14:22:50.412: INFO: Created: latency-svc-cb9fz Aug 26 
14:22:50.525: INFO: Got endpoints: latency-svc-cb9fz [8.603927399s] Aug 26 14:22:50.562: INFO: Created: latency-svc-7bhqm Aug 26 14:22:50.581: INFO: Got endpoints: latency-svc-7bhqm [7.673249983s] Aug 26 14:22:51.057: INFO: Created: latency-svc-n6sxb Aug 26 14:22:51.071: INFO: Got endpoints: latency-svc-n6sxb [6.701521091s] Aug 26 14:22:51.286: INFO: Created: latency-svc-k9vxh Aug 26 14:22:51.291: INFO: Got endpoints: latency-svc-k9vxh [6.327864529s] Aug 26 14:22:51.622: INFO: Created: latency-svc-w7296 Aug 26 14:22:51.706: INFO: Got endpoints: latency-svc-w7296 [5.958402563s] Aug 26 14:22:51.859: INFO: Created: latency-svc-5dk22 Aug 26 14:22:51.896: INFO: Got endpoints: latency-svc-5dk22 [6.100492665s] Aug 26 14:22:51.944: INFO: Created: latency-svc-7x6tb Aug 26 14:22:52.009: INFO: Got endpoints: latency-svc-7x6tb [5.65073593s] Aug 26 14:22:52.052: INFO: Created: latency-svc-hl8bk Aug 26 14:22:52.071: INFO: Got endpoints: latency-svc-hl8bk [5.348215067s] Aug 26 14:22:52.100: INFO: Created: latency-svc-mlgkz Aug 26 14:22:52.159: INFO: Got endpoints: latency-svc-mlgkz [4.906457728s] Aug 26 14:22:52.208: INFO: Created: latency-svc-dj5xn Aug 26 14:22:52.241: INFO: Got endpoints: latency-svc-dj5xn [4.622261845s] Aug 26 14:22:52.329: INFO: Created: latency-svc-4vmm7 Aug 26 14:22:52.348: INFO: Got endpoints: latency-svc-4vmm7 [2.709628868s] Aug 26 14:22:52.488: INFO: Created: latency-svc-p29h2 Aug 26 14:22:52.491: INFO: Got endpoints: latency-svc-p29h2 [2.696785733s] Aug 26 14:22:52.561: INFO: Created: latency-svc-c2z4s Aug 26 14:22:52.581: INFO: Got endpoints: latency-svc-c2z4s [2.619185296s] Aug 26 14:22:52.633: INFO: Created: latency-svc-hbk4w Aug 26 14:22:52.647: INFO: Got endpoints: latency-svc-hbk4w [2.45111737s] Aug 26 14:22:52.684: INFO: Created: latency-svc-jqrmc Aug 26 14:22:52.695: INFO: Got endpoints: latency-svc-jqrmc [2.349407043s] Aug 26 14:22:52.718: INFO: Created: latency-svc-f9t6c Aug 26 14:22:52.731: INFO: Got endpoints: latency-svc-f9t6c [2.206029852s] Aug 26 14:22:52.787: INFO: Created: latency-svc-9j7gz Aug 26 14:22:52.798: INFO: Got endpoints: latency-svc-9j7gz [2.216356768s] Aug 26 14:22:52.832: INFO: Created: latency-svc-qjt26 Aug 26 14:22:52.852: INFO: Got endpoints: latency-svc-qjt26 [1.780668018s] Aug 26 14:22:52.874: INFO: Created: latency-svc-fpbm5 Aug 26 14:22:52.973: INFO: Got endpoints: latency-svc-fpbm5 [1.681543921s] Aug 26 14:22:52.973: INFO: Created: latency-svc-zt44x Aug 26 14:22:52.990: INFO: Got endpoints: latency-svc-zt44x [1.283658821s] Aug 26 14:22:53.049: INFO: Created: latency-svc-xdh7q Aug 26 14:22:53.062: INFO: Got endpoints: latency-svc-xdh7q [1.166339426s] Aug 26 14:22:53.141: INFO: Created: latency-svc-zw2cf Aug 26 14:22:53.176: INFO: Got endpoints: latency-svc-zw2cf [1.167036377s] Aug 26 14:22:53.392: INFO: Created: latency-svc-ms2vm Aug 26 14:22:53.420: INFO: Got endpoints: latency-svc-ms2vm [1.348625367s] Aug 26 14:22:53.485: INFO: Created: latency-svc-lv4fs Aug 26 14:22:53.604: INFO: Got endpoints: latency-svc-lv4fs [1.444261771s] Aug 26 14:22:53.609: INFO: Created: latency-svc-97627 Aug 26 14:22:53.658: INFO: Got endpoints: latency-svc-97627 [1.417182017s] Aug 26 14:22:53.770: INFO: Created: latency-svc-47nkv Aug 26 14:22:53.808: INFO: Got endpoints: latency-svc-47nkv [1.459817357s] Aug 26 14:22:53.979: INFO: Created: latency-svc-z8kfp Aug 26 14:22:54.019: INFO: Got endpoints: latency-svc-z8kfp [1.527659009s] Aug 26 14:22:54.070: INFO: Created: latency-svc-cgxts Aug 26 14:22:54.178: INFO: Got endpoints: latency-svc-cgxts [1.595867804s] Aug 
26 14:22:54.528: INFO: Created: latency-svc-g86rg Aug 26 14:22:54.613: INFO: Got endpoints: latency-svc-g86rg [1.965590727s] Aug 26 14:22:55.055: INFO: Created: latency-svc-5gcvq Aug 26 14:22:55.460: INFO: Got endpoints: latency-svc-5gcvq [2.764517124s] Aug 26 14:22:56.045: INFO: Created: latency-svc-mbmfw Aug 26 14:22:56.106: INFO: Got endpoints: latency-svc-mbmfw [3.374644331s] Aug 26 14:22:56.291: INFO: Created: latency-svc-8fvlg Aug 26 14:22:56.422: INFO: Got endpoints: latency-svc-8fvlg [3.623868866s] Aug 26 14:22:56.440: INFO: Created: latency-svc-5l7xz Aug 26 14:22:56.506: INFO: Got endpoints: latency-svc-5l7xz [3.65370214s] Aug 26 14:22:56.608: INFO: Created: latency-svc-n2zv5 Aug 26 14:22:56.763: INFO: Got endpoints: latency-svc-n2zv5 [3.790570633s] Aug 26 14:22:56.764: INFO: Created: latency-svc-pmccn Aug 26 14:22:56.836: INFO: Created: latency-svc-9txgh Aug 26 14:22:56.837: INFO: Got endpoints: latency-svc-pmccn [3.84670671s] Aug 26 14:22:56.938: INFO: Got endpoints: latency-svc-9txgh [3.875975662s] Aug 26 14:22:57.106: INFO: Created: latency-svc-bxsjq Aug 26 14:22:57.167: INFO: Created: latency-svc-t859n Aug 26 14:22:57.167: INFO: Got endpoints: latency-svc-bxsjq [3.990757333s] Aug 26 14:22:57.437: INFO: Got endpoints: latency-svc-t859n [4.016255368s] Aug 26 14:22:57.827: INFO: Created: latency-svc-xjrxb Aug 26 14:22:58.107: INFO: Got endpoints: latency-svc-xjrxb [4.502930683s] Aug 26 14:22:58.109: INFO: Created: latency-svc-mnsjj Aug 26 14:22:58.682: INFO: Got endpoints: latency-svc-mnsjj [5.023744526s] Aug 26 14:22:58.685: INFO: Created: latency-svc-mmjcr Aug 26 14:22:58.700: INFO: Got endpoints: latency-svc-mmjcr [4.891296458s] Aug 26 14:22:59.575: INFO: Created: latency-svc-v2sn4 Aug 26 14:22:59.611: INFO: Got endpoints: latency-svc-v2sn4 [5.591947055s] Aug 26 14:23:00.155: INFO: Created: latency-svc-vd765 Aug 26 14:23:00.245: INFO: Got endpoints: latency-svc-vd765 [6.067046373s] Aug 26 14:23:00.592: INFO: Created: latency-svc-crnfh Aug 26 14:23:00.905: INFO: Got endpoints: latency-svc-crnfh [6.292300534s] Aug 26 14:23:01.179: INFO: Created: latency-svc-k7bvk Aug 26 14:23:01.234: INFO: Got endpoints: latency-svc-k7bvk [5.77389007s] Aug 26 14:23:01.352: INFO: Created: latency-svc-qkvv6 Aug 26 14:23:01.384: INFO: Got endpoints: latency-svc-qkvv6 [5.277665319s] Aug 26 14:23:01.669: INFO: Created: latency-svc-k5dt5 Aug 26 14:23:01.922: INFO: Got endpoints: latency-svc-k5dt5 [5.499611617s] Aug 26 14:23:02.288: INFO: Created: latency-svc-pbx84 Aug 26 14:23:02.830: INFO: Got endpoints: latency-svc-pbx84 [6.323637939s] Aug 26 14:23:03.053: INFO: Created: latency-svc-z5lxr Aug 26 14:23:03.056: INFO: Got endpoints: latency-svc-z5lxr [6.292233253s] Aug 26 14:23:03.218: INFO: Created: latency-svc-8gbn8 Aug 26 14:23:03.290: INFO: Got endpoints: latency-svc-8gbn8 [6.452842706s] Aug 26 14:23:03.402: INFO: Created: latency-svc-kfd9c Aug 26 14:23:03.416: INFO: Got endpoints: latency-svc-kfd9c [6.477356899s] Aug 26 14:23:03.472: INFO: Created: latency-svc-7r6f4 Aug 26 14:23:03.481: INFO: Got endpoints: latency-svc-7r6f4 [6.313841328s] Aug 26 14:23:03.565: INFO: Created: latency-svc-rql8d Aug 26 14:23:03.595: INFO: Got endpoints: latency-svc-rql8d [6.158373003s] Aug 26 14:23:03.632: INFO: Created: latency-svc-wphl2 Aug 26 14:23:03.651: INFO: Got endpoints: latency-svc-wphl2 [5.543961707s] Aug 26 14:23:03.709: INFO: Created: latency-svc-bnl5r Aug 26 14:23:03.752: INFO: Got endpoints: latency-svc-bnl5r [5.069623596s] Aug 26 14:23:03.753: INFO: Created: latency-svc-xbnkj Aug 26 14:23:03.777: 
INFO: Got endpoints: latency-svc-xbnkj [5.076739174s] Aug 26 14:23:03.866: INFO: Created: latency-svc-tv7z4 Aug 26 14:23:03.897: INFO: Got endpoints: latency-svc-tv7z4 [4.286427795s] Aug 26 14:23:03.919: INFO: Created: latency-svc-w6zvf Aug 26 14:23:03.940: INFO: Got endpoints: latency-svc-w6zvf [3.694619423s] Aug 26 14:23:04.046: INFO: Created: latency-svc-vkckj Aug 26 14:23:04.064: INFO: Got endpoints: latency-svc-vkckj [3.158634843s] Aug 26 14:23:04.160: INFO: Created: latency-svc-q2qt4 Aug 26 14:23:04.537: INFO: Got endpoints: latency-svc-q2qt4 [3.302318701s] Aug 26 14:23:04.733: INFO: Created: latency-svc-kxmds Aug 26 14:23:04.785: INFO: Got endpoints: latency-svc-kxmds [3.400764184s] Aug 26 14:23:04.789: INFO: Created: latency-svc-x26tx Aug 26 14:23:04.901: INFO: Got endpoints: latency-svc-x26tx [2.979114159s] Aug 26 14:23:04.929: INFO: Created: latency-svc-9tv5n Aug 26 14:23:05.001: INFO: Got endpoints: latency-svc-9tv5n [2.17076259s] Aug 26 14:23:05.106: INFO: Created: latency-svc-nq7f7 Aug 26 14:23:05.138: INFO: Got endpoints: latency-svc-nq7f7 [2.082223629s] Aug 26 14:23:05.180: INFO: Created: latency-svc-fsbq8 Aug 26 14:23:05.376: INFO: Got endpoints: latency-svc-fsbq8 [2.086457337s] Aug 26 14:23:05.549: INFO: Created: latency-svc-v55pw Aug 26 14:23:05.580: INFO: Got endpoints: latency-svc-v55pw [2.163539419s] Aug 26 14:23:05.782: INFO: Created: latency-svc-szn2p Aug 26 14:23:05.786: INFO: Got endpoints: latency-svc-szn2p [2.304681122s] Aug 26 14:23:05.987: INFO: Created: latency-svc-f9glf Aug 26 14:23:06.011: INFO: Got endpoints: latency-svc-f9glf [2.415906704s] Aug 26 14:23:06.045: INFO: Created: latency-svc-fbn56 Aug 26 14:23:06.197: INFO: Got endpoints: latency-svc-fbn56 [2.545486631s] Aug 26 14:23:06.253: INFO: Created: latency-svc-7rgwn Aug 26 14:23:06.271: INFO: Got endpoints: latency-svc-7rgwn [2.518625071s] Aug 26 14:23:06.384: INFO: Created: latency-svc-qpcwf Aug 26 14:23:06.390: INFO: Got endpoints: latency-svc-qpcwf [2.612329627s] Aug 26 14:23:06.442: INFO: Created: latency-svc-6llqv Aug 26 14:23:06.451: INFO: Got endpoints: latency-svc-6llqv [2.553025838s] Aug 26 14:23:06.520: INFO: Created: latency-svc-sdz7n Aug 26 14:23:06.556: INFO: Got endpoints: latency-svc-sdz7n [2.615705839s] Aug 26 14:23:06.689: INFO: Created: latency-svc-8znsf Aug 26 14:23:06.727: INFO: Got endpoints: latency-svc-8znsf [2.662684218s] Aug 26 14:23:09.004: INFO: Created: latency-svc-2fxd2 Aug 26 14:23:09.075: INFO: Got endpoints: latency-svc-2fxd2 [4.53824257s] Aug 26 14:23:09.770: INFO: Created: latency-svc-756pr Aug 26 14:23:09.775: INFO: Got endpoints: latency-svc-756pr [4.989887223s] Aug 26 14:23:10.730: INFO: Created: latency-svc-mftsp Aug 26 14:23:10.738: INFO: Got endpoints: latency-svc-mftsp [5.837227004s] Aug 26 14:23:11.318: INFO: Created: latency-svc-jh27g Aug 26 14:23:11.364: INFO: Got endpoints: latency-svc-jh27g [6.363397982s] Aug 26 14:23:11.728: INFO: Created: latency-svc-qlpc6 Aug 26 14:23:11.743: INFO: Got endpoints: latency-svc-qlpc6 [6.603978997s] Aug 26 14:23:11.952: INFO: Created: latency-svc-q62xk Aug 26 14:23:11.993: INFO: Got endpoints: latency-svc-q62xk [6.616684129s] Aug 26 14:23:12.354: INFO: Created: latency-svc-2qq22 Aug 26 14:23:12.355: INFO: Got endpoints: latency-svc-2qq22 [6.774707478s] Aug 26 14:23:13.273: INFO: Created: latency-svc-4bw9k Aug 26 14:23:13.277: INFO: Got endpoints: latency-svc-4bw9k [7.490961246s] Aug 26 14:23:13.731: INFO: Created: latency-svc-tnrp9 Aug 26 14:23:14.149: INFO: Got endpoints: latency-svc-tnrp9 [8.137708797s] Aug 26 
14:23:14.570: INFO: Created: latency-svc-q5c6b Aug 26 14:23:14.574: INFO: Got endpoints: latency-svc-q5c6b [8.376979568s] Aug 26 14:23:14.920: INFO: Created: latency-svc-zmrtm Aug 26 14:23:15.293: INFO: Got endpoints: latency-svc-zmrtm [9.022128524s] Aug 26 14:23:15.567: INFO: Created: latency-svc-br2mz Aug 26 14:23:16.004: INFO: Got endpoints: latency-svc-br2mz [9.613788765s] Aug 26 14:23:16.271: INFO: Created: latency-svc-jphzs Aug 26 14:23:16.351: INFO: Got endpoints: latency-svc-jphzs [9.899897629s] Aug 26 14:23:16.606: INFO: Created: latency-svc-j6mw6 Aug 26 14:23:16.645: INFO: Got endpoints: latency-svc-j6mw6 [10.089198643s] Aug 26 14:23:16.812: INFO: Created: latency-svc-hfmzx Aug 26 14:23:16.861: INFO: Got endpoints: latency-svc-hfmzx [10.133488455s] Aug 26 14:23:17.310: INFO: Created: latency-svc-np74s Aug 26 14:23:17.602: INFO: Got endpoints: latency-svc-np74s [8.526810075s] Aug 26 14:23:17.651: INFO: Created: latency-svc-dl5gc Aug 26 14:23:17.895: INFO: Got endpoints: latency-svc-dl5gc [8.119835242s] Aug 26 14:23:18.202: INFO: Created: latency-svc-whdjd Aug 26 14:23:18.392: INFO: Got endpoints: latency-svc-whdjd [7.653166652s] Aug 26 14:23:18.456: INFO: Created: latency-svc-hx82b Aug 26 14:23:18.904: INFO: Got endpoints: latency-svc-hx82b [7.539021103s] Aug 26 14:23:19.153: INFO: Created: latency-svc-dcnfs Aug 26 14:23:19.328: INFO: Got endpoints: latency-svc-dcnfs [7.585096638s] Aug 26 14:23:19.331: INFO: Created: latency-svc-94p8d Aug 26 14:23:19.383: INFO: Got endpoints: latency-svc-94p8d [7.389199943s] Aug 26 14:23:19.500: INFO: Created: latency-svc-s57kw Aug 26 14:23:19.926: INFO: Got endpoints: latency-svc-s57kw [7.571172244s] Aug 26 14:23:20.147: INFO: Created: latency-svc-8gfc9 Aug 26 14:23:20.206: INFO: Got endpoints: latency-svc-8gfc9 [6.928521662s] Aug 26 14:23:20.415: INFO: Created: latency-svc-zdzmw Aug 26 14:23:20.554: INFO: Got endpoints: latency-svc-zdzmw [6.404453653s] Aug 26 14:23:20.599: INFO: Created: latency-svc-rjg4v Aug 26 14:23:21.454: INFO: Got endpoints: latency-svc-rjg4v [6.879596105s] Aug 26 14:23:21.903: INFO: Created: latency-svc-wzh8j Aug 26 14:23:22.172: INFO: Got endpoints: latency-svc-wzh8j [6.878400065s] Aug 26 14:23:22.872: INFO: Created: latency-svc-6w4zl Aug 26 14:23:23.335: INFO: Got endpoints: latency-svc-6w4zl [7.331028634s] Aug 26 14:23:23.363: INFO: Created: latency-svc-mkjfx Aug 26 14:23:23.425: INFO: Got endpoints: latency-svc-mkjfx [7.073979848s] Aug 26 14:23:24.177: INFO: Created: latency-svc-lh6rh Aug 26 14:23:24.447: INFO: Got endpoints: latency-svc-lh6rh [7.80132032s] Aug 26 14:23:24.449: INFO: Created: latency-svc-5hgf2 Aug 26 14:23:24.711: INFO: Got endpoints: latency-svc-5hgf2 [7.849587583s] Aug 26 14:23:24.987: INFO: Created: latency-svc-jsctv Aug 26 14:23:25.097: INFO: Got endpoints: latency-svc-jsctv [7.494691327s] Aug 26 14:23:25.179: INFO: Created: latency-svc-x4v75 Aug 26 14:23:25.192: INFO: Got endpoints: latency-svc-x4v75 [7.296529785s] Aug 26 14:23:25.279: INFO: Created: latency-svc-hd4pw Aug 26 14:23:25.307: INFO: Got endpoints: latency-svc-hd4pw [6.915005774s] Aug 26 14:23:25.446: INFO: Created: latency-svc-k5s6r Aug 26 14:23:25.450: INFO: Got endpoints: latency-svc-k5s6r [6.546459048s] Aug 26 14:23:25.882: INFO: Created: latency-svc-xbvnf Aug 26 14:23:26.286: INFO: Got endpoints: latency-svc-xbvnf [6.957220497s] Aug 26 14:23:26.540: INFO: Created: latency-svc-7tct8 Aug 26 14:23:26.566: INFO: Got endpoints: latency-svc-7tct8 [7.183153679s] Aug 26 14:23:26.846: INFO: Created: latency-svc-mvqx5 Aug 26 14:23:26.878: 
INFO: Got endpoints: latency-svc-mvqx5 [6.951162111s] Aug 26 14:23:27.172: INFO: Created: latency-svc-77htg Aug 26 14:23:27.675: INFO: Got endpoints: latency-svc-77htg [7.468382506s] Aug 26 14:23:27.806: INFO: Created: latency-svc-mwb7w Aug 26 14:23:27.825: INFO: Got endpoints: latency-svc-mwb7w [7.270774296s] Aug 26 14:23:28.220: INFO: Created: latency-svc-m6kww Aug 26 14:23:28.246: INFO: Got endpoints: latency-svc-m6kww [6.791368703s] Aug 26 14:23:28.407: INFO: Created: latency-svc-g7cms Aug 26 14:23:28.419: INFO: Got endpoints: latency-svc-g7cms [6.246253352s] Aug 26 14:23:28.478: INFO: Created: latency-svc-cmsql Aug 26 14:23:28.554: INFO: Got endpoints: latency-svc-cmsql [5.219191677s] Aug 26 14:23:28.586: INFO: Created: latency-svc-5kjr5 Aug 26 14:23:28.606: INFO: Got endpoints: latency-svc-5kjr5 [5.180825653s] Aug 26 14:23:28.646: INFO: Created: latency-svc-trmnd Aug 26 14:23:28.753: INFO: Got endpoints: latency-svc-trmnd [4.303743488s] Aug 26 14:23:28.763: INFO: Created: latency-svc-4kdn8 Aug 26 14:23:28.799: INFO: Got endpoints: latency-svc-4kdn8 [4.087556706s] Aug 26 14:23:28.904: INFO: Created: latency-svc-2c7sd Aug 26 14:23:28.954: INFO: Got endpoints: latency-svc-2c7sd [3.856643489s] Aug 26 14:23:29.793: INFO: Created: latency-svc-g9c87 Aug 26 14:23:29.831: INFO: Got endpoints: latency-svc-g9c87 [4.639310321s] Aug 26 14:23:30.483: INFO: Created: latency-svc-kcvrn Aug 26 14:23:30.487: INFO: Got endpoints: latency-svc-kcvrn [5.179717952s] Aug 26 14:23:30.909: INFO: Created: latency-svc-2966s Aug 26 14:23:31.070: INFO: Got endpoints: latency-svc-2966s [5.619109886s] Aug 26 14:23:31.676: INFO: Created: latency-svc-7n88h Aug 26 14:23:32.957: INFO: Got endpoints: latency-svc-7n88h [6.671446142s] Aug 26 14:23:32.978: INFO: Created: latency-svc-s8sdl Aug 26 14:23:33.358: INFO: Got endpoints: latency-svc-s8sdl [6.791370938s] Aug 26 14:23:34.161: INFO: Created: latency-svc-7lfbz Aug 26 14:23:34.538: INFO: Got endpoints: latency-svc-7lfbz [7.660078713s] Aug 26 14:23:34.565: INFO: Created: latency-svc-zl4bg Aug 26 14:23:35.131: INFO: Got endpoints: latency-svc-zl4bg [7.455697138s] Aug 26 14:23:35.375: INFO: Created: latency-svc-4rws4 Aug 26 14:23:35.396: INFO: Got endpoints: latency-svc-4rws4 [7.570482477s] Aug 26 14:23:36.155: INFO: Created: latency-svc-6dc2f Aug 26 14:23:36.158: INFO: Got endpoints: latency-svc-6dc2f [7.912157155s] Aug 26 14:23:37.235: INFO: Created: latency-svc-s9w9j Aug 26 14:23:37.799: INFO: Got endpoints: latency-svc-s9w9j [9.379881179s] Aug 26 14:23:37.812: INFO: Created: latency-svc-mnpjm Aug 26 14:23:37.882: INFO: Got endpoints: latency-svc-mnpjm [9.327218776s] Aug 26 14:23:38.425: INFO: Created: latency-svc-8s5rc Aug 26 14:23:38.908: INFO: Got endpoints: latency-svc-8s5rc [10.301944471s] Aug 26 14:23:39.301: INFO: Created: latency-svc-nmlz4 Aug 26 14:23:39.326: INFO: Got endpoints: latency-svc-nmlz4 [10.573308783s] Aug 26 14:23:39.697: INFO: Created: latency-svc-nr7f2 Aug 26 14:23:40.250: INFO: Got endpoints: latency-svc-nr7f2 [11.45160648s] Aug 26 14:23:40.489: INFO: Created: latency-svc-ckhnk Aug 26 14:23:40.562: INFO: Got endpoints: latency-svc-ckhnk [11.607910489s] Aug 26 14:23:40.713: INFO: Created: latency-svc-zdqt7 Aug 26 14:23:41.238: INFO: Got endpoints: latency-svc-zdqt7 [11.406600969s] Aug 26 14:23:41.471: INFO: Created: latency-svc-4k66p Aug 26 14:23:41.791: INFO: Got endpoints: latency-svc-4k66p [11.303520175s] Aug 26 14:23:42.106: INFO: Created: latency-svc-qt2qf Aug 26 14:23:42.163: INFO: Got endpoints: latency-svc-qt2qf [11.092693008s] Aug 26 
14:23:42.442: INFO: Created: latency-svc-fzxvj Aug 26 14:23:42.479: INFO: Got endpoints: latency-svc-fzxvj [9.521388569s] Aug 26 14:23:42.660: INFO: Created: latency-svc-gd7rp Aug 26 14:23:42.873: INFO: Got endpoints: latency-svc-gd7rp [9.515002285s] Aug 26 14:23:42.959: INFO: Created: latency-svc-8hr7n Aug 26 14:23:43.221: INFO: Got endpoints: latency-svc-8hr7n [8.682522527s] Aug 26 14:23:43.248: INFO: Created: latency-svc-nr887 Aug 26 14:23:43.283: INFO: Got endpoints: latency-svc-nr887 [8.151857388s] Aug 26 14:23:43.622: INFO: Created: latency-svc-wxzgx Aug 26 14:23:44.525: INFO: Got endpoints: latency-svc-wxzgx [9.129240701s] Aug 26 14:23:44.622: INFO: Created: latency-svc-t5zbg Aug 26 14:23:45.103: INFO: Got endpoints: latency-svc-t5zbg [8.944460386s] Aug 26 14:23:45.985: INFO: Created: latency-svc-r4bhc Aug 26 14:23:46.202: INFO: Got endpoints: latency-svc-r4bhc [8.40332276s] Aug 26 14:23:46.277: INFO: Created: latency-svc-fq4sm Aug 26 14:23:46.617: INFO: Got endpoints: latency-svc-fq4sm [8.735023999s] Aug 26 14:23:46.927: INFO: Created: latency-svc-bkkbm Aug 26 14:23:47.004: INFO: Got endpoints: latency-svc-bkkbm [8.095791229s] Aug 26 14:23:47.269: INFO: Created: latency-svc-msj6j Aug 26 14:23:47.495: INFO: Got endpoints: latency-svc-msj6j [8.168167633s] Aug 26 14:23:47.807: INFO: Created: latency-svc-66sx4 Aug 26 14:23:48.245: INFO: Got endpoints: latency-svc-66sx4 [7.99402004s] Aug 26 14:23:48.473: INFO: Created: latency-svc-k4r78 Aug 26 14:23:48.556: INFO: Got endpoints: latency-svc-k4r78 [7.993639761s] Aug 26 14:23:49.334: INFO: Created: latency-svc-qmlsw Aug 26 14:23:49.668: INFO: Created: latency-svc-gqn5s Aug 26 14:23:49.669: INFO: Got endpoints: latency-svc-qmlsw [8.430233226s] Aug 26 14:23:49.672: INFO: Got endpoints: latency-svc-gqn5s [7.880903356s] Aug 26 14:23:51.220: INFO: Created: latency-svc-47vd8 Aug 26 14:23:51.224: INFO: Got endpoints: latency-svc-47vd8 [9.06099432s] Aug 26 14:23:51.502: INFO: Created: latency-svc-s5dxs Aug 26 14:23:51.813: INFO: Got endpoints: latency-svc-s5dxs [9.334051933s] Aug 26 14:23:51.911: INFO: Created: latency-svc-zncpl Aug 26 14:23:52.100: INFO: Got endpoints: latency-svc-zncpl [9.226869208s] Aug 26 14:23:52.402: INFO: Created: latency-svc-r22tc Aug 26 14:23:52.453: INFO: Got endpoints: latency-svc-r22tc [9.231719204s] Aug 26 14:23:52.830: INFO: Created: latency-svc-h2d8b Aug 26 14:23:53.017: INFO: Got endpoints: latency-svc-h2d8b [9.733659235s] Aug 26 14:23:53.238: INFO: Created: latency-svc-hdmz2 Aug 26 14:23:53.285: INFO: Got endpoints: latency-svc-hdmz2 [8.75918361s] Aug 26 14:23:53.489: INFO: Created: latency-svc-s5crt Aug 26 14:23:53.501: INFO: Got endpoints: latency-svc-s5crt [8.397392821s] Aug 26 14:23:54.343: INFO: Created: latency-svc-74dnx Aug 26 14:23:54.371: INFO: Got endpoints: latency-svc-74dnx [8.167792418s] Aug 26 14:23:54.900: INFO: Created: latency-svc-l9w9r Aug 26 14:23:55.215: INFO: Got endpoints: latency-svc-l9w9r [8.59775791s] Aug 26 14:23:55.464: INFO: Created: latency-svc-6x8nh Aug 26 14:23:55.532: INFO: Got endpoints: latency-svc-6x8nh [8.527452485s] Aug 26 14:23:55.813: INFO: Created: latency-svc-9dn7h Aug 26 14:23:56.148: INFO: Got endpoints: latency-svc-9dn7h [8.653174951s] Aug 26 14:23:56.939: INFO: Created: latency-svc-fcxsl Aug 26 14:23:57.362: INFO: Created: latency-svc-2bb6n Aug 26 14:23:57.363: INFO: Got endpoints: latency-svc-fcxsl [9.117396794s] Aug 26 14:23:57.605: INFO: Got endpoints: latency-svc-2bb6n [9.049032253s] Aug 26 14:23:58.077: INFO: Created: latency-svc-z5dht Aug 26 14:23:58.124: INFO: 
Got endpoints: latency-svc-z5dht [8.454657078s] Aug 26 14:23:58.601: INFO: Created: latency-svc-bxpzx Aug 26 14:23:58.769: INFO: Got endpoints: latency-svc-bxpzx [9.096673692s] Aug 26 14:23:58.923: INFO: Created: latency-svc-dw4sv Aug 26 14:23:58.925: INFO: Got endpoints: latency-svc-dw4sv [7.700944613s] Aug 26 14:23:59.123: INFO: Created: latency-svc-ssdfj Aug 26 14:23:59.434: INFO: Created: latency-svc-7z67f Aug 26 14:23:59.435: INFO: Got endpoints: latency-svc-ssdfj [7.621026083s] Aug 26 14:23:59.854: INFO: Got endpoints: latency-svc-7z67f [7.753784125s] Aug 26 14:23:59.856: INFO: Created: latency-svc-mxrcd Aug 26 14:23:59.909: INFO: Got endpoints: latency-svc-mxrcd [7.455747729s] Aug 26 14:24:00.245: INFO: Created: latency-svc-tzvmm Aug 26 14:24:00.318: INFO: Got endpoints: latency-svc-tzvmm [7.300934938s] Aug 26 14:24:00.544: INFO: Created: latency-svc-gnp5g Aug 26 14:24:00.547: INFO: Got endpoints: latency-svc-gnp5g [7.26154618s] Aug 26 14:24:00.916: INFO: Created: latency-svc-prd9s Aug 26 14:24:00.923: INFO: Got endpoints: latency-svc-prd9s [7.421307557s] Aug 26 14:24:00.925: INFO: Latencies: [772.314897ms 1.166339426s 1.167036377s 1.250252754s 1.283658821s 1.348625367s 1.417182017s 1.444261771s 1.459817357s 1.527659009s 1.595867804s 1.681543921s 1.760884853s 1.780668018s 1.965590727s 2.082223629s 2.086457337s 2.163539419s 2.17076259s 2.206029852s 2.216356768s 2.304681122s 2.349407043s 2.415906704s 2.45111737s 2.518625071s 2.545486631s 2.553025838s 2.612329627s 2.615705839s 2.619185296s 2.662684218s 2.677064279s 2.696785733s 2.709628868s 2.764517124s 2.979114159s 3.158634843s 3.272840526s 3.302318701s 3.374644331s 3.400764184s 3.623868866s 3.65370214s 3.694619423s 3.790570633s 3.84670671s 3.856643489s 3.875975662s 3.990757333s 4.016255368s 4.087556706s 4.286427795s 4.303743488s 4.502930683s 4.53824257s 4.550058003s 4.622261845s 4.639310321s 4.763302744s 4.891296458s 4.906457728s 4.952264664s 4.989887223s 5.023744526s 5.069623596s 5.076739174s 5.130929397s 5.179717952s 5.180825653s 5.219191677s 5.277665319s 5.33951578s 5.348215067s 5.499611617s 5.543961707s 5.591947055s 5.619109886s 5.65073593s 5.77389007s 5.837227004s 5.839945025s 5.958402563s 6.067046373s 6.100492665s 6.158373003s 6.246253352s 6.292233253s 6.292300534s 6.310645057s 6.313841328s 6.323637939s 6.327864529s 6.363397982s 6.404453653s 6.452842706s 6.477356899s 6.546459048s 6.603978997s 6.616684129s 6.671446142s 6.701521091s 6.774707478s 6.791368703s 6.791370938s 6.878400065s 6.879596105s 6.915005774s 6.928521662s 6.951162111s 6.957220497s 7.073979848s 7.183153679s 7.26154618s 7.270774296s 7.296529785s 7.298275152s 7.300934938s 7.331028634s 7.389199943s 7.421307557s 7.455697138s 7.455747729s 7.458325537s 7.468382506s 7.490961246s 7.494691327s 7.539021103s 7.570482477s 7.571172244s 7.585096638s 7.621026083s 7.653166652s 7.660078713s 7.673249983s 7.700944613s 7.753784125s 7.80132032s 7.849587583s 7.880903356s 7.912157155s 7.993639761s 7.99402004s 8.095791229s 8.119835242s 8.137708797s 8.151857388s 8.167792418s 8.168167633s 8.370004444s 8.376979568s 8.397392821s 8.40332276s 8.430233226s 8.436309875s 8.454657078s 8.526810075s 8.527452485s 8.59775791s 8.603927399s 8.653174951s 8.682522527s 8.735023999s 8.758683154s 8.75918361s 8.895648775s 8.935154204s 8.944460386s 8.987641351s 9.022128524s 9.049032253s 9.06099432s 9.096673692s 9.117396794s 9.129240701s 9.221175812s 9.226869208s 9.231719204s 9.231829151s 9.246054223s 9.265318685s 9.327218776s 9.334051933s 9.353808431s 9.364974171s 9.379881179s 9.515002285s 9.521388569s 
9.613788765s 9.733659235s 9.899897629s 10.089198643s 10.133488455s 10.301944471s 10.573308783s 11.092693008s 11.303520175s 11.406600969s 11.45160648s 11.607910489s] Aug 26 14:24:00.928: INFO: 50 %ile: 6.671446142s Aug 26 14:24:00.928: INFO: 90 %ile: 9.265318685s Aug 26 14:24:00.928: INFO: 99 %ile: 11.45160648s Aug 26 14:24:00.928: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:24:00.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1701" for this suite. • [SLOW TEST:95.279 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":39,"skipped":662,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:24:01.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:24:17.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9461" for this suite. • [SLOW TEST:15.919 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":40,"skipped":665,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:24:17.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-6ecfcea1-b793-4f87-8ee8-4aa249cbc6f1 STEP: Creating a pod to test consume configMaps Aug 26 14:24:17.890: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0" in namespace "projected-7229" to be "success or failure" Aug 26 14:24:17.923: INFO: Pod "pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.463401ms Aug 26 14:24:20.210: INFO: Pod "pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320393589s Aug 26 14:24:22.271: INFO: Pod "pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380659589s Aug 26 14:24:24.304: INFO: Pod "pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.413678215s STEP: Saw pod success Aug 26 14:24:24.304: INFO: Pod "pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0" satisfied condition "success or failure" Aug 26 14:24:24.314: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0 container projected-configmap-volume-test: STEP: delete the pod Aug 26 14:24:24.392: INFO: Waiting for pod pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0 to disappear Aug 26 14:24:24.398: INFO: Pod pod-projected-configmaps-638192eb-5bc4-49cf-aeb1-a17c759f05e0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:24:24.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7229" for this suite. 
• [SLOW TEST:7.308 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":684,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:24:24.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-edc0d711-3a55-4729-9f38-056c792904c3 STEP: Creating a pod to test consume secrets Aug 26 14:24:24.719: INFO: Waiting up to 5m0s for pod "pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6" in namespace "secrets-3632" to be "success or failure" Aug 26 14:24:24.736: INFO: Pod "pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.435366ms Aug 26 14:24:26.813: INFO: Pod "pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094261819s Aug 26 14:24:28.865: INFO: Pod "pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146142124s STEP: Saw pod success Aug 26 14:24:28.865: INFO: Pod "pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6" satisfied condition "success or failure" Aug 26 14:24:28.900: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6 container secret-volume-test: STEP: delete the pod Aug 26 14:24:31.285: INFO: Waiting for pod pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6 to disappear Aug 26 14:24:31.399: INFO: Pod pod-secrets-f52f3548-3fa2-431e-b764-f240bf09a4e6 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:24:31.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3632" for this suite. 
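
Editorial sketch: the secret-to-path mapping exercised above can be reproduced by hand against any cluster reachable with kubectl. All names, the key/path pair, and the busybox image below are illustrative, not the test's actual fixtures:

  # Create a secret, then mount a single key at a chosen path inside the pod.
  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1; sleep 3600"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1
          path: new-path-data-1
  EOF
  kubectl logs secret-mapping-demo   # should print "value-1"
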
• [SLOW TEST:7.017 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:24:31.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 26 14:24:39.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 26 14:24:41.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048678, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:24:44.141: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048678, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:24:46.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048678, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 14:24:48.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048679, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734048678, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 14:24:51.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 26 14:25:00.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1222 to-be-attached-pod -i -c=container1' Aug 26 14:25:02.204: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 14:25:02.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1222" for this suite. STEP: Destroying namespace "webhook-1222-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:38.034 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":43,"skipped":710,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 14:25:09.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9692.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9692.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9692.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9692.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9692.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 138.105.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.105.138_udp@PTR;check="$$(dig +tcp +noall +answer +search 138.105.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.105.138_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9692.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9692.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9692.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9692.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9692.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9692.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 138.105.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.105.138_udp@PTR;check="$$(dig +tcp +noall +answer +search 138.105.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.105.138_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 26 14:25:38.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:39.522: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:40.115: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:40.467: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:44.725: INFO: Unable to read jessie_udp@dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:44.768: INFO: Unable to read jessie_tcp@dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:45.703: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:46.509: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:50.600: INFO: Lookups using dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7 failed for: [wheezy_udp@dns-test-service.dns-9692.svc.cluster.local wheezy_tcp@dns-test-service.dns-9692.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local jessie_udp@dns-test-service.dns-9692.svc.cluster.local jessie_tcp@dns-test-service.dns-9692.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local] Aug 26 14:25:55.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7) Aug 26 14:25:56.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods 
dns-test-408f9ba0-949d-43d4-95dc-259686972ad7)
Aug 26 14:25:56.828: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7)
Aug 26 14:25:57.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7)
Aug 26 14:26:00.978: INFO: Unable to read jessie_udp@dns-test-service.dns-9692.svc.cluster.local from pod dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7: the server could not find the requested resource (get pods dns-test-408f9ba0-949d-43d4-95dc-259686972ad7)
Aug 26 14:26:05.846: INFO: Lookups using dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7 failed for: [wheezy_udp@dns-test-service.dns-9692.svc.cluster.local wheezy_tcp@dns-test-service.dns-9692.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9692.svc.cluster.local jessie_udp@dns-test-service.dns-9692.svc.cluster.local]
Aug 26 14:26:15.614: INFO: DNS probes using dns-9692/dns-test-408f9ba0-949d-43d4-95dc-259686972ad7 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:26:18.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9692" for this suite.

• [SLOW TEST:69.679 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":44,"skipped":714,"failed":0}
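
Editorial note: the PTR probes in the dig script above show how the reverse name is derived. The service IP 10.105.105.138, specific to this run, is octet-reversed under in-addr.arpa, giving 138.105.105.10.in-addr.arpa. A minimal manual spot-check from any pod that has dig installed (a sketch, not part of the test):

  # Forward lookup of the test service record, then the equivalent reverse lookup;
  # dig -x builds the 138.105.105.10.in-addr.arpa. PTR query automatically.
  dig +search +noall +answer dns-test-service.dns-9692.svc.cluster.local A
  dig +noall +answer -x 10.105.105.138
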
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:26:19.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-285ceeb6-2404-4df7-b88e-ffaf875db340
STEP: Creating a pod to test consume configMaps
Aug 26 14:26:21.512: INFO: Waiting up to 5m0s for pod "pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7" in namespace "configmap-9484" to be "success or failure"
Aug 26 14:26:21.545: INFO: Pod "pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.215335ms
Aug 26 14:26:23.665: INFO: Pod "pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152326303s
Aug 26 14:26:25.699: INFO: Pod "pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18688582s
Aug 26 14:26:27.896: INFO: Pod "pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383624061s
STEP: Saw pod success
Aug 26 14:26:27.897: INFO: Pod "pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7" satisfied condition "success or failure"
Aug 26 14:26:27.927: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7 container configmap-volume-test: 
STEP: delete the pod
Aug 26 14:26:28.624: INFO: Waiting for pod pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7 to disappear
Aug 26 14:26:28.664: INFO: Pod pod-configmaps-68c6e55f-07e4-4a24-b8c0-9013ad5f21f7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:26:28.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9484" for this suite.

• [SLOW TEST:9.662 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":735,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:26:28.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 26 14:26:29.732: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-330 /api/v1/namespaces/watch-330/configmaps/e2e-watch-test-label-changed 550b75e6-41ca-490a-b697-f3dcaaebe421 3898255 0 2020-08-26 14:26:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 14:26:29.734: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-330 /api/v1/namespaces/watch-330/configmaps/e2e-watch-test-label-changed 550b75e6-41ca-490a-b697-f3dcaaebe421 3898258 0 2020-08-26 14:26:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 26 14:26:29.736: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-330 /api/v1/namespaces/watch-330/configmaps/e2e-watch-test-label-changed 550b75e6-41ca-490a-b697-f3dcaaebe421 3898262 0 2020-08-26 14:26:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 26 14:26:39.806: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-330 /api/v1/namespaces/watch-330/configmaps/e2e-watch-test-label-changed 550b75e6-41ca-490a-b697-f3dcaaebe421 3898331 0 2020-08-26 14:26:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 14:26:39.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-330 /api/v1/namespaces/watch-330/configmaps/e2e-watch-test-label-changed 550b75e6-41ca-490a-b697-f3dcaaebe421 3898332 0 2020-08-26 14:26:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 26 14:26:39.808: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-330 /api/v1/namespaces/watch-330/configmaps/e2e-watch-test-label-changed 550b75e6-41ca-490a-b697-f3dcaaebe421 3898333 0 2020-08-26 14:26:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:26:39.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-330" for this suite.

• [SLOW TEST:10.899 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":46,"skipped":768,"failed":0}
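
Editorial sketch: the label-change sequence above can be reproduced with a label-selector watch. A rough kubectl equivalent (the configmap and label names match the log; the two-terminal split is illustrative):

  # Watch configmaps carrying the label (run in a second terminal):
  kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch
  # Create and label the object; the watcher reports it as ADDED:
  kubectl create configmap e2e-watch-test-label-changed
  kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored
  # Re-labelling it out of the selector surfaces DELETED to this watcher,
  # and restoring the label surfaces ADDED again:
  kubectl label --overwrite configmap e2e-watch-test-label-changed watch-this-configmap=other
  kubectl label --overwrite configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored
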
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:26:39.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 14:26:45.534: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:26:45.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7668" for this suite.

• [SLOW TEST:6.051 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":772,"failed":0}
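
Editorial sketch: the behaviour verified above amounts to a container that logs DONE and exits non-zero while terminationMessagePolicy is FallbackToLogsOnError, so the kubelet copies the log tail into the termination message. A minimal by-hand version (pod name and busybox image are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo DONE; exit 1"]
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # Once the container has failed, the log tail shows up as the termination message:
  kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
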
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:26:45.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:26:46.281: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 26 14:26:48.414: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:26:48.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8862" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":48,"skipped":777,"failed":0}
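
Editorial sketch: a by-hand version of the quota-pressure scenario above. The quota and rc names mirror the log; the nginx image and the app label are illustrative:

  kubectl create quota condition-test --hard=pods=2
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: condition-test
  spec:
    replicas: 3
    selector:
      app: condition-test
    template:
      metadata:
        labels:
          app: condition-test
      spec:
        containers:
        - name: main
          image: nginx
  EOF
  # The rc reports a ReplicaFailure condition while the quota blocks the third pod:
  kubectl get rc condition-test -o jsonpath='{.status.conditions}'
  # Scaling down to fit the quota clears the condition:
  kubectl scale rc condition-test --replicas=2
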
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:26:48.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:26:48.909: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log
containers/

(the same two-entry listing was returned for each of the 20 proxied requests)
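
Editorial note: the listing above is the kubelet's log directory, reached through the API server's node proxy subresource on the explicit kubelet port. The same endpoint can be queried directly; a one-line check against this run's node:

  # Proxy through the API server to the kubelet (port 10250) and list its log directory:
  kubectl get --raw /api/v1/nodes/jerma-worker2:10250/proxy/logs/
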
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 14:27:06.508: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 14:27:06.537: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 14:27:08.538: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 14:27:09.278: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 14:27:10.538: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 14:27:10.550: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 14:27:12.538: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 14:27:13.338: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 14:27:14.538: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 14:27:14.619: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:27:14.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1603" for this suite.

• [SLOW TEST:25.282 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":811,"failed":0}
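
Editorial sketch: the prestop sequence above can be approximated with a pod that registers an exec preStop hook. On deletion the kubelet runs the hook inside the container before killing it, so the delete visibly takes about as long as the hook. Names, the busybox image, and the 10-second sleep are illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]
  EOF
  # The delete should not complete until the preStop hook has had its ~10s
  # (the hook runs within the pod's termination grace period, 30s by default):
  kubectl delete pod prestop-demo
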
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:27:15.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 26 14:27:15.624: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898555 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 14:27:15.624: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898555 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 26 14:27:25.880: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898595 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 26 14:27:25.881: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898595 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 26 14:27:35.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898621 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 14:27:35.894: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898621 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 26 14:27:45.913: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898647 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 14:27:45.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-a 4c711bc5-9589-4c9e-b402-63fa576eb12e 3898647 0 2020-08-26 14:27:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 26 14:27:55.981: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-b 5262dabb-322c-4ed1-b580-66e7e5996555 3898673 0 2020-08-26 14:27:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 14:27:55.982: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-b 5262dabb-322c-4ed1-b580-66e7e5996555 3898673 0 2020-08-26 14:27:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 26 14:28:06.157: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-b 5262dabb-322c-4ed1-b580-66e7e5996555 3898699 0 2020-08-26 14:27:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 14:28:06.157: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4679 /api/v1/namespaces/watch-4679/configmaps/e2e-watch-test-configmap-b 5262dabb-322c-4ed1-b580-66e7e5996555 3898699 0 2020-08-26 14:27:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:28:16.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4679" for this suite.

• [SLOW TEST:61.167 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":51,"skipped":871,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:28:16.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 14:28:17.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1497'
Aug 26 14:28:40.775: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 14:28:40.775: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Aug 26 14:28:41.343: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug 26 14:28:41.376: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 26 14:28:41.445: INFO: scanned /root for discovery docs: 
Aug 26 14:28:41.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1497'
Aug 26 14:29:05.572: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 26 14:29:05.572: INFO: stdout: "Created e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03\nScaling up e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 26 14:29:05.572: INFO: stdout: "Created e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03\nScaling up e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 26 14:29:05.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1497'
Aug 26 14:29:06.698: INFO: stderr: ""
Aug 26 14:29:06.699: INFO: stdout: "e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03-xhdnq "
Aug 26 14:29:06.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03-xhdnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1497'
Aug 26 14:29:07.786: INFO: stderr: ""
Aug 26 14:29:07.786: INFO: stdout: "true"
Aug 26 14:29:07.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03-xhdnq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1497'
Aug 26 14:29:09.075: INFO: stderr: ""
Aug 26 14:29:09.076: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 26 14:29:09.076: INFO: e2e-test-httpd-rc-b72e6bf7dfecda3974dfea46f5e2fe03-xhdnq is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 26 14:29:09.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1497'
Aug 26 14:29:10.527: INFO: stderr: ""
Aug 26 14:29:10.527: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:29:10.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1497" for this suite.

• [SLOW TEST:55.250 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":52,"skipped":890,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:29:11.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 26 14:29:13.637: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6505 /api/v1/namespaces/watch-6505/configmaps/e2e-watch-test-resource-version a8da7de3-61ff-47a0-8b1d-e98038c5eee2 3898933 0 2020-08-26 14:29:11 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 14:29:13.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6505 /api/v1/namespaces/watch-6505/configmaps/e2e-watch-test-resource-version a8da7de3-61ff-47a0-8b1d-e98038c5eee2 3898935 0 2020-08-26 14:29:11 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:29:13.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6505" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":53,"skipped":929,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:29:13.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 26 14:29:14.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 26 14:29:15.183: INFO: stderr: ""
Aug 26 14:29:15.183: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:29:15.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-40" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":54,"skipped":951,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:29:15.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-c9d5c043-04f8-4f68-acf4-c14628051276
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:29:16.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-71" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":55,"skipped":952,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:29:16.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:29:31.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3848" for this suite.

• [SLOW TEST:15.630 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":961,"failed":0}
SS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:29:32.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-26, will wait for the garbage collector to delete the pods
Aug 26 14:29:52.668: INFO: Deleting Job.batch foo took: 295.476916ms
Aug 26 14:29:53.769: INFO: Terminating Job.batch foo pods took: 1.100989713s
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:30:31.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-26" for this suite.

• [SLOW TEST:59.326 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":57,"skipped":963,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:30:31.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 14:30:48.655: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 14:30:51.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049049, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:30:53.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049049, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:30:57.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049049, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:30:57.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049049, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049048, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 14:31:00.872: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:31:00.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:31:02.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8777" for this suite.
STEP: Destroying namespace "webhook-8777-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:31.421 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":58,"skipped":968,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:31:03.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-17e6d6ee-b9c9-454e-af7f-cf7f1cd4063a
STEP: Creating a pod to test consume configMaps
Aug 26 14:31:04.312: INFO: Waiting up to 5m0s for pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170" in namespace "configmap-5233" to be "success or failure"
Aug 26 14:31:04.771: INFO: Pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170": Phase="Pending", Reason="", readiness=false. Elapsed: 458.777513ms
Aug 26 14:31:06.779: INFO: Pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466713995s
Aug 26 14:31:09.360: INFO: Pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170": Phase="Pending", Reason="", readiness=false. Elapsed: 5.047868216s
Aug 26 14:31:11.528: INFO: Pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170": Phase="Pending", Reason="", readiness=false. Elapsed: 7.215284896s
Aug 26 14:31:13.536: INFO: Pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170": Phase="Running", Reason="", readiness=true. Elapsed: 9.223368235s
Aug 26 14:31:15.542: INFO: Pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.230082924s
STEP: Saw pod success
Aug 26 14:31:15.543: INFO: Pod "pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170" satisfied condition "success or failure"
Aug 26 14:31:15.547: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170 container configmap-volume-test: 
STEP: delete the pod
Aug 26 14:31:15.582: INFO: Waiting for pod pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170 to disappear
Aug 26 14:31:15.598: INFO: Pod pod-configmaps-6fe75c60-3b7d-4350-9fdb-a8640db5e170 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:31:15.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5233" for this suite.

• [SLOW TEST:12.280 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":974,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:31:15.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5317.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5317.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5317.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5317.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 14:31:30.020: INFO: DNS probes using dns-test-8c493255-6529-404f-9736-1ba8d06926f9 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5317.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5317.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5317.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5317.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 14:31:50.968: INFO: File wheezy_udp@dns-test-service-3.dns-5317.svc.cluster.local from pod  dns-5317/dns-test-dd8e9633-c4a9-4f39-b5aa-9839f2bfc2ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 26 14:31:50.974: INFO: File jessie_udp@dns-test-service-3.dns-5317.svc.cluster.local from pod  dns-5317/dns-test-dd8e9633-c4a9-4f39-b5aa-9839f2bfc2ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 26 14:31:50.975: INFO: Lookups using dns-5317/dns-test-dd8e9633-c4a9-4f39-b5aa-9839f2bfc2ab failed for: [wheezy_udp@dns-test-service-3.dns-5317.svc.cluster.local jessie_udp@dns-test-service-3.dns-5317.svc.cluster.local]

Aug 26 14:31:56.099: INFO: File jessie_udp@dns-test-service-3.dns-5317.svc.cluster.local from pod  dns-5317/dns-test-dd8e9633-c4a9-4f39-b5aa-9839f2bfc2ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 26 14:31:56.099: INFO: Lookups using dns-5317/dns-test-dd8e9633-c4a9-4f39-b5aa-9839f2bfc2ab failed for: [jessie_udp@dns-test-service-3.dns-5317.svc.cluster.local]

Aug 26 14:32:01.793: INFO: DNS probes using dns-test-dd8e9633-c4a9-4f39-b5aa-9839f2bfc2ab succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5317.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5317.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5317.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5317.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 14:32:22.599: INFO: DNS probes using dns-test-41a7feca-940c-47b3-bdfe-8df466db7a5e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:32:23.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5317" for this suite.

• [SLOW TEST:68.042 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":60,"skipped":985,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:32:23.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:32:24.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-851" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":61,"skipped":999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:32:24.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 26 14:32:25.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9288'
Aug 26 14:32:27.895: INFO: stderr: ""
Aug 26 14:32:27.895: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 14:32:27.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9288'
Aug 26 14:32:29.520: INFO: stderr: ""
Aug 26 14:32:29.521: INFO: stdout: "update-demo-nautilus-f9m25 update-demo-nautilus-h8ljj "
Aug 26 14:32:29.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9m25 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9288'
Aug 26 14:32:30.809: INFO: stderr: ""
Aug 26 14:32:30.809: INFO: stdout: ""
Aug 26 14:32:30.809: INFO: update-demo-nautilus-f9m25 is created but not running
Aug 26 14:32:35.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9288'
Aug 26 14:32:37.394: INFO: stderr: ""
Aug 26 14:32:37.394: INFO: stdout: "update-demo-nautilus-f9m25 update-demo-nautilus-h8ljj "
Aug 26 14:32:37.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9m25 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9288'
Aug 26 14:32:38.774: INFO: stderr: ""
Aug 26 14:32:38.774: INFO: stdout: "true"
Aug 26 14:32:38.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9m25 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9288'
Aug 26 14:32:40.100: INFO: stderr: ""
Aug 26 14:32:40.100: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 14:32:40.100: INFO: validating pod update-demo-nautilus-f9m25
Aug 26 14:32:40.389: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 14:32:40.390: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 14:32:40.390: INFO: update-demo-nautilus-f9m25 is verified up and running
Aug 26 14:32:40.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h8ljj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9288'
Aug 26 14:32:41.651: INFO: stderr: ""
Aug 26 14:32:41.652: INFO: stdout: "true"
Aug 26 14:32:41.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h8ljj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9288'
Aug 26 14:32:43.746: INFO: stderr: ""
Aug 26 14:32:43.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 14:32:43.746: INFO: validating pod update-demo-nautilus-h8ljj
Aug 26 14:32:44.226: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 14:32:44.226: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 14:32:44.226: INFO: update-demo-nautilus-h8ljj is verified up and running
STEP: using delete to clean up resources
Aug 26 14:32:44.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9288'
Aug 26 14:32:45.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 14:32:45.774: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 26 14:32:45.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9288'
Aug 26 14:32:47.168: INFO: stderr: "No resources found in kubectl-9288 namespace.\n"
Aug 26 14:32:47.169: INFO: stdout: ""
Aug 26 14:32:47.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9288 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 14:32:48.508: INFO: stderr: ""
Aug 26 14:32:48.508: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:32:48.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9288" for this suite.

• [SLOW TEST:24.484 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":62,"skipped":1034,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:32:48.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-de982895-5136-4d60-bc36-5626d83eafd9
STEP: Creating secret with name s-test-opt-upd-d7cb0ccb-ef87-4916-b5a8-cae413bf87d7
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-de982895-5136-4d60-bc36-5626d83eafd9
STEP: Updating secret s-test-opt-upd-d7cb0ccb-ef87-4916-b5a8-cae413bf87d7
STEP: Creating secret with name s-test-opt-create-078d2f19-20a0-4ae1-aaf7-81219fb535fe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:33:13.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3511" for this suite.

• [SLOW TEST:25.016 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1045,"failed":0}
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:33:14.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 26 14:33:24.809: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:33:25.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9209" for this suite.

• [SLOW TEST:12.333 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":64,"skipped":1045,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:33:26.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-713c1e85-08df-46c3-a1e4-d645875b4773
STEP: Creating a pod to test consume configMaps
Aug 26 14:33:28.407: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6" in namespace "projected-3596" to be "success or failure"
Aug 26 14:33:29.326: INFO: Pod "pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6": Phase="Pending", Reason="", readiness=false. Elapsed: 919.279228ms
Aug 26 14:33:31.807: INFO: Pod "pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.400017156s
Aug 26 14:33:33.983: INFO: Pod "pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.575977878s
Aug 26 14:33:36.128: INFO: Pod "pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.720873208s
Aug 26 14:33:38.492: INFO: Pod "pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08542543s
STEP: Saw pod success
Aug 26 14:33:38.492: INFO: Pod "pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6" satisfied condition "success or failure"
Aug 26 14:33:38.497: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 14:33:39.924: INFO: Waiting for pod pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6 to disappear
Aug 26 14:33:40.203: INFO: Pod pod-projected-configmaps-6819a986-f279-4ec0-b5a5-4b01bc0e88e6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:33:40.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3596" for this suite.

• [SLOW TEST:14.426 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1059,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:33:40.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:33:42.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 14:34:01.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 create -f -'
Aug 26 14:34:19.824: INFO: stderr: ""
Aug 26 14:34:19.824: INFO: stdout: "e2e-test-crd-publish-openapi-6166-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 26 14:34:19.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 delete e2e-test-crd-publish-openapi-6166-crds test-cr'
Aug 26 14:34:21.341: INFO: stderr: ""
Aug 26 14:34:21.341: INFO: stdout: "e2e-test-crd-publish-openapi-6166-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 26 14:34:21.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 apply -f -'
Aug 26 14:34:23.001: INFO: stderr: ""
Aug 26 14:34:23.001: INFO: stdout: "e2e-test-crd-publish-openapi-6166-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 26 14:34:23.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5619 delete e2e-test-crd-publish-openapi-6166-crds test-cr'
Aug 26 14:34:24.286: INFO: stderr: ""
Aug 26 14:34:24.286: INFO: stdout: "e2e-test-crd-publish-openapi-6166-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 26 14:34:24.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6166-crds'
Aug 26 14:34:25.750: INFO: stderr: ""
Aug 26 14:34:25.750: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6166-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:34:44.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5619" for this suite.

• [SLOW TEST:63.779 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":66,"skipped":1066,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:34:44.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 26 14:34:45.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8263 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 26 14:34:56.589: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0826 14:34:56.457077     644 log.go:172] (0x28d4070) (0x28d40e0) Create stream\nI0826 14:34:56.459224     644 log.go:172] (0x28d4070) (0x28d40e0) Stream added, broadcasting: 1\nI0826 14:34:56.479805     644 log.go:172] (0x28d4070) Reply frame received for 1\nI0826 14:34:56.480514     644 log.go:172] (0x28d4070) (0x2bde070) Create stream\nI0826 14:34:56.480595     644 log.go:172] (0x28d4070) (0x2bde070) Stream added, broadcasting: 3\nI0826 14:34:56.482251     644 log.go:172] (0x28d4070) Reply frame received for 3\nI0826 14:34:56.482530     644 log.go:172] (0x28d4070) (0x2bde310) Create stream\nI0826 14:34:56.482599     644 log.go:172] (0x28d4070) (0x2bde310) Stream added, broadcasting: 5\nI0826 14:34:56.484066     644 log.go:172] (0x28d4070) Reply frame received for 5\nI0826 14:34:56.484290     644 log.go:172] (0x28d4070) (0x2b50070) Create stream\nI0826 14:34:56.484362     644 log.go:172] (0x28d4070) (0x2b50070) Stream added, broadcasting: 7\nI0826 14:34:56.485806     644 log.go:172] (0x28d4070) Reply frame received for 7\nI0826 14:34:56.488129     644 log.go:172] (0x2bde070) (3) Writing data frame\nI0826 14:34:56.489641     644 log.go:172] (0x2bde070) (3) Writing data frame\nI0826 14:34:56.490844     644 log.go:172] (0x28d4070) Data frame received for 5\nI0826 14:34:56.491060     644 log.go:172] (0x2bde310) (5) Data frame handling\nI0826 14:34:56.491401     644 log.go:172] (0x2bde310) (5) Data frame sent\nI0826 14:34:56.491874     644 log.go:172] (0x28d4070) Data frame received for 5\nI0826 14:34:56.491967     644 log.go:172] (0x2bde310) (5) Data frame handling\nI0826 14:34:56.492071     644 log.go:172] (0x2bde310) (5) Data frame sent\nI0826 14:34:56.522122     644 log.go:172] (0x28d4070) Data frame received for 7\nI0826 14:34:56.522640     644 log.go:172] (0x28d4070) Data frame received for 1\nI0826 14:34:56.522979     644 log.go:172] (0x28d40e0) (1) Data frame handling\nI0826 14:34:56.523155     644 log.go:172] (0x2b50070) (7) Data frame handling\nI0826 14:34:56.523516     644 log.go:172] (0x28d4070) Data frame received for 5\nI0826 14:34:56.523762     644 log.go:172] (0x2bde310) (5) Data frame handling\nI0826 14:34:56.524165     644 log.go:172] (0x28d40e0) (1) Data frame sent\nI0826 14:34:56.525222     644 log.go:172] (0x28d4070) (0x28d40e0) Stream removed, broadcasting: 1\nI0826 14:34:56.532523     644 log.go:172] (0x28d4070) (0x2bde070) Stream removed, broadcasting: 3\nI0826 14:34:56.533629     644 log.go:172] (0x28d4070) (0x28d40e0) Stream removed, broadcasting: 1\nI0826 14:34:56.534976     644 log.go:172] (0x28d4070) (0x2bde070) Stream removed, broadcasting: 3\nI0826 14:34:56.535070     644 log.go:172] (0x28d4070) (0x2bde310) Stream removed, broadcasting: 5\nI0826 14:34:56.536905     644 log.go:172] (0x28d4070) (0x2b50070) Stream removed, broadcasting: 7\nI0826 14:34:56.541194     644 log.go:172] (0x28d4070) Go away received\n"
Aug 26 14:34:56.591: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:34:59.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8263" for this suite.

• [SLOW TEST:15.667 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":67,"skipped":1074,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:35:00.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:35:20.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7444" for this suite.

• [SLOW TEST:20.862 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":68,"skipped":1076,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:35:21.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-51ed65ad-f2f3-464a-95ec-f9fd36fa7cf7 in namespace container-probe-207
Aug 26 14:35:30.020: INFO: Started pod busybox-51ed65ad-f2f3-464a-95ec-f9fd36fa7cf7 in namespace container-probe-207
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 14:35:30.025: INFO: Initial restart count of pod busybox-51ed65ad-f2f3-464a-95ec-f9fd36fa7cf7 is 0
Aug 26 14:36:23.420: INFO: Restart count of pod container-probe-207/busybox-51ed65ad-f2f3-464a-95ec-f9fd36fa7cf7 is now 1 (53.394777137s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:36:23.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-207" for this suite.

• [SLOW TEST:62.398 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1079,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:36:23.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4487
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4487
I0826 14:36:23.727029       7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4487, replica count: 2
I0826 14:36:26.778813       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 14:36:29.779663       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 14:36:32.780393       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 14:36:32.780: INFO: Creating new exec pod
Aug 26 14:36:43.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4487 execpodkzx4z -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 26 14:36:44.722: INFO: stderr: "I0826 14:36:44.567405     668 log.go:172] (0x2930770) (0x29307e0) Create stream\nI0826 14:36:44.569474     668 log.go:172] (0x2930770) (0x29307e0) Stream added, broadcasting: 1\nI0826 14:36:44.587987     668 log.go:172] (0x2930770) Reply frame received for 1\nI0826 14:36:44.588410     668 log.go:172] (0x2930770) (0x25ee850) Create stream\nI0826 14:36:44.588470     668 log.go:172] (0x2930770) (0x25ee850) Stream added, broadcasting: 3\nI0826 14:36:44.589666     668 log.go:172] (0x2930770) Reply frame received for 3\nI0826 14:36:44.589947     668 log.go:172] (0x2930770) (0x25ef650) Create stream\nI0826 14:36:44.590043     668 log.go:172] (0x2930770) (0x25ef650) Stream added, broadcasting: 5\nI0826 14:36:44.591298     668 log.go:172] (0x2930770) Reply frame received for 5\nI0826 14:36:44.671902     668 log.go:172] (0x2930770) Data frame received for 5\nI0826 14:36:44.672220     668 log.go:172] (0x25ef650) (5) Data frame handling\nI0826 14:36:44.672933     668 log.go:172] (0x25ef650) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0826 14:36:44.699665     668 log.go:172] (0x2930770) Data frame received for 5\nI0826 14:36:44.699898     668 log.go:172] (0x2930770) Data frame received for 3\nI0826 14:36:44.700114     668 log.go:172] (0x25ee850) (3) Data frame handling\nI0826 14:36:44.700331     668 log.go:172] (0x25ef650) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0826 14:36:44.700532     668 log.go:172] (0x25ef650) (5) Data frame sent\nI0826 14:36:44.701067     668 log.go:172] (0x2930770) Data frame received for 5\nI0826 14:36:44.701181     668 log.go:172] (0x25ef650) (5) Data frame handling\nI0826 14:36:44.702384     668 log.go:172] (0x2930770) Data frame received for 1\nI0826 14:36:44.702529     668 log.go:172] (0x29307e0) (1) Data frame handling\nI0826 14:36:44.702780     668 log.go:172] (0x29307e0) (1) Data frame sent\nI0826 14:36:44.703724     668 log.go:172] (0x2930770) (0x29307e0) Stream removed, broadcasting: 1\nI0826 14:36:44.705655     668 log.go:172] (0x2930770) Go away received\nI0826 14:36:44.709216     668 log.go:172] (0x2930770) (0x29307e0) Stream removed, broadcasting: 1\nI0826 14:36:44.709450     668 log.go:172] (0x2930770) (0x25ee850) Stream removed, broadcasting: 3\nI0826 14:36:44.709616     668 log.go:172] (0x2930770) (0x25ef650) Stream removed, broadcasting: 5\n"
Aug 26 14:36:44.723: INFO: stdout: ""
Aug 26 14:36:44.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4487 execpodkzx4z -- /bin/sh -x -c nc -zv -t -w 2 10.96.69.33 80'
Aug 26 14:36:46.090: INFO: stderr: "I0826 14:36:46.003168     691 log.go:172] (0x25ec000) (0x25ec070) Create stream\nI0826 14:36:46.005557     691 log.go:172] (0x25ec000) (0x25ec070) Stream added, broadcasting: 1\nI0826 14:36:46.017922     691 log.go:172] (0x25ec000) Reply frame received for 1\nI0826 14:36:46.018546     691 log.go:172] (0x25ec000) (0x25e0690) Create stream\nI0826 14:36:46.018616     691 log.go:172] (0x25ec000) (0x25e0690) Stream added, broadcasting: 3\nI0826 14:36:46.020070     691 log.go:172] (0x25ec000) Reply frame received for 3\nI0826 14:36:46.020401     691 log.go:172] (0x25ec000) (0x25e19d0) Create stream\nI0826 14:36:46.020495     691 log.go:172] (0x25ec000) (0x25e19d0) Stream added, broadcasting: 5\nI0826 14:36:46.022182     691 log.go:172] (0x25ec000) Reply frame received for 5\nI0826 14:36:46.072539     691 log.go:172] (0x25ec000) Data frame received for 5\nI0826 14:36:46.072939     691 log.go:172] (0x25e19d0) (5) Data frame handling\nI0826 14:36:46.073072     691 log.go:172] (0x25ec000) Data frame received for 1\nI0826 14:36:46.073254     691 log.go:172] (0x25ec070) (1) Data frame handling\nI0826 14:36:46.073449     691 log.go:172] (0x25ec000) Data frame received for 3\nI0826 14:36:46.073635     691 log.go:172] (0x25e0690) (3) Data frame handling\nI0826 14:36:46.074204     691 log.go:172] (0x25ec070) (1) Data frame sent\nI0826 14:36:46.074581     691 log.go:172] (0x25e19d0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.69.33 80\nConnection to 10.96.69.33 80 port [tcp/http] succeeded!\nI0826 14:36:46.075427     691 log.go:172] (0x25ec000) Data frame received for 5\nI0826 14:36:46.075564     691 log.go:172] (0x25e19d0) (5) Data frame handling\nI0826 14:36:46.076540     691 log.go:172] (0x25ec000) (0x25ec070) Stream removed, broadcasting: 1\nI0826 14:36:46.077909     691 log.go:172] (0x25ec000) Go away received\nI0826 14:36:46.080044     691 log.go:172] (0x25ec000) (0x25ec070) Stream removed, broadcasting: 1\nI0826 14:36:46.080254     691 log.go:172] (0x25ec000) (0x25e0690) Stream removed, broadcasting: 3\nI0826 14:36:46.080425     691 log.go:172] (0x25ec000) (0x25e19d0) Stream removed, broadcasting: 5\n"
Aug 26 14:36:46.091: INFO: stdout: ""
Aug 26 14:36:46.091: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:36:46.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4487" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.674 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":70,"skipped":1111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:36:46.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-dd4b835a-0c66-4cad-a267-9314098c6b18
STEP: Creating secret with name secret-projected-all-test-volume-cb1a1c25-d2cd-40c8-beed-3d419f447ebe
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 26 14:36:46.281: INFO: Waiting up to 5m0s for pod "projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285" in namespace "projected-3074" to be "success or failure"
Aug 26 14:36:46.294: INFO: Pod "projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285": Phase="Pending", Reason="", readiness=false. Elapsed: 12.718727ms
Aug 26 14:36:48.301: INFO: Pod "projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0195652s
Aug 26 14:36:50.494: INFO: Pod "projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285": Phase="Running", Reason="", readiness=true. Elapsed: 4.213155931s
Aug 26 14:36:52.752: INFO: Pod "projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.470447003s
STEP: Saw pod success
Aug 26 14:36:52.752: INFO: Pod "projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285" satisfied condition "success or failure"
Aug 26 14:36:52.791: INFO: Trying to get logs from node jerma-worker pod projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285 container projected-all-volume-test: 
STEP: delete the pod
Aug 26 14:36:52.941: INFO: Waiting for pod projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285 to disappear
Aug 26 14:36:52.969: INFO: Pod projected-volume-8baed61e-5a5c-4926-bd44-899c0ca4f285 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:36:52.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3074" for this suite.

• [SLOW TEST:7.348 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:36:53.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:36:59.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3840" for this suite.

• [SLOW TEST:5.655 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1191,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:36:59.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 26 14:36:59.544: INFO: Waiting up to 5m0s for pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a" in namespace "emptydir-908" to be "success or failure"
Aug 26 14:36:59.600: INFO: Pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 55.7472ms
Aug 26 14:37:01.606: INFO: Pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061994699s
Aug 26 14:37:03.710: INFO: Pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166077326s
Aug 26 14:37:05.853: INFO: Pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309172918s
Aug 26 14:37:07.897: INFO: Pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35289503s
Aug 26 14:37:10.369: INFO: Pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.824817362s
STEP: Saw pod success
Aug 26 14:37:10.369: INFO: Pod "pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a" satisfied condition "success or failure"
Aug 26 14:37:10.374: INFO: Trying to get logs from node jerma-worker2 pod pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a container test-container: 
STEP: delete the pod
Aug 26 14:37:11.695: INFO: Waiting for pod pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a to disappear
Aug 26 14:37:11.749: INFO: Pod pod-122da1a4-5ef5-4a0c-935f-87532c06bd0a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:37:11.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-908" for this suite.

• [SLOW TEST:12.590 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1218,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:37:11.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 26 14:37:17.893: INFO: &Pod{ObjectMeta:{send-events-10eada98-b722-4e53-ac2d-653144074ae4  events-2233 /api/v1/namespaces/events-2233/pods/send-events-10eada98-b722-4e53-ac2d-653144074ae4 4f3262a0-2798-42b6-8b84-0ec4d73e1330 3900918 0 2020-08-26 14:37:13 +0000 UTC   map[name:foo time:679621287] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n4kqm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n4kqm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n4kqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:37:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:37:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:37:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 14:37:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.30,StartTime:2020-08-26 14:37:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 14:37:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://f28cb29ae171ec055646bcdae779a4c6a7919cd60e6afc405f1e3c0ef1ac97e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.30,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 26 14:37:19.906: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 26 14:37:21.915: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:37:21.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2233" for this suite.

• [SLOW TEST:10.205 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":74,"skipped":1284,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:37:21.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 26 14:37:34.608: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 14:37:34.624: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 14:37:36.624: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 14:37:36.655: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 14:37:38.625: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 14:37:38.630: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 14:37:40.625: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 14:37:40.631: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 14:37:42.625: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 14:37:42.636: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:37:42.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2092" for this suite.

• [SLOW TEST:20.675 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1287,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:37:42.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3374
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3374
STEP: creating replication controller externalsvc in namespace services-3374
I0826 14:37:42.906329       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3374, replica count: 2
I0826 14:37:45.958013       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 14:37:48.958780       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 26 14:37:49.031: INFO: Creating new exec pod
Aug 26 14:37:53.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3374 execpods5zrh -- /bin/sh -x -c nslookup clusterip-service'
Aug 26 14:37:54.591: INFO: stderr: "I0826 14:37:54.374760     713 log.go:172] (0x2c6e150) (0x2c6e1c0) Create stream\nI0826 14:37:54.378285     713 log.go:172] (0x2c6e150) (0x2c6e1c0) Stream added, broadcasting: 1\nI0826 14:37:54.398810     713 log.go:172] (0x2c6e150) Reply frame received for 1\nI0826 14:37:54.399339     713 log.go:172] (0x2c6e150) (0x24a2150) Create stream\nI0826 14:37:54.399403     713 log.go:172] (0x2c6e150) (0x24a2150) Stream added, broadcasting: 3\nI0826 14:37:54.401344     713 log.go:172] (0x2c6e150) Reply frame received for 3\nI0826 14:37:54.401799     713 log.go:172] (0x2c6e150) (0x271d110) Create stream\nI0826 14:37:54.401906     713 log.go:172] (0x2c6e150) (0x271d110) Stream added, broadcasting: 5\nI0826 14:37:54.403416     713 log.go:172] (0x2c6e150) Reply frame received for 5\nI0826 14:37:54.496074     713 log.go:172] (0x2c6e150) Data frame received for 5\nI0826 14:37:54.496265     713 log.go:172] (0x271d110) (5) Data frame handling\nI0826 14:37:54.496610     713 log.go:172] (0x271d110) (5) Data frame sent\n+ nslookup clusterip-service\nI0826 14:37:54.569922     713 log.go:172] (0x2c6e150) Data frame received for 3\nI0826 14:37:54.570163     713 log.go:172] (0x24a2150) (3) Data frame handling\nI0826 14:37:54.570330     713 log.go:172] (0x24a2150) (3) Data frame sent\nI0826 14:37:54.570465     713 log.go:172] (0x2c6e150) Data frame received for 3\nI0826 14:37:54.570584     713 log.go:172] (0x24a2150) (3) Data frame handling\nI0826 14:37:54.570970     713 log.go:172] (0x2c6e150) Data frame received for 5\nI0826 14:37:54.571156     713 log.go:172] (0x271d110) (5) Data frame handling\nI0826 14:37:54.571312     713 log.go:172] (0x24a2150) (3) Data frame sent\nI0826 14:37:54.571436     713 log.go:172] (0x2c6e150) Data frame received for 3\nI0826 14:37:54.571536     713 log.go:172] (0x24a2150) (3) Data frame handling\nI0826 14:37:54.573137     713 log.go:172] (0x2c6e150) Data frame received for 1\nI0826 14:37:54.573320     713 log.go:172] (0x2c6e1c0) (1) Data frame handling\nI0826 14:37:54.573568     713 log.go:172] (0x2c6e1c0) (1) Data frame sent\nI0826 14:37:54.574863     713 log.go:172] (0x2c6e150) (0x2c6e1c0) Stream removed, broadcasting: 1\nI0826 14:37:54.576392     713 log.go:172] (0x2c6e150) Go away received\nI0826 14:37:54.580949     713 log.go:172] (0x2c6e150) (0x2c6e1c0) Stream removed, broadcasting: 1\nI0826 14:37:54.581125     713 log.go:172] (0x2c6e150) (0x24a2150) Stream removed, broadcasting: 3\nI0826 14:37:54.581260     713 log.go:172] (0x2c6e150) (0x271d110) Stream removed, broadcasting: 5\n"
Aug 26 14:37:54.592: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3374.svc.cluster.local\tcanonical name = externalsvc.services-3374.svc.cluster.local.\nName:\texternalsvc.services-3374.svc.cluster.local\nAddress: 10.110.82.144\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3374, will wait for the garbage collector to delete the pods
Aug 26 14:37:54.657: INFO: Deleting ReplicationController externalsvc took: 8.539566ms
Aug 26 14:37:54.958: INFO: Terminating ReplicationController externalsvc pods took: 300.857279ms
Aug 26 14:38:11.704: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:38:11.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3374" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:29.112 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":76,"skipped":1293,"failed":0}
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:38:11.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 26 14:38:16.416: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5300 pod-service-account-22856915-0276-47c5-be29-1f3905592738 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 26 14:38:17.885: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5300 pod-service-account-22856915-0276-47c5-be29-1f3905592738 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 26 14:38:19.246: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5300 pod-service-account-22856915-0276-47c5-be29-1f3905592738 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:38:20.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5300" for this suite.

• [SLOW TEST:8.912 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":77,"skipped":1300,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:38:20.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-fa3a66aa-3af9-43ef-aaba-e1390fae296d
STEP: Creating a pod to test consume configMaps
Aug 26 14:38:20.754: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9" in namespace "projected-9691" to be "success or failure"
Aug 26 14:38:20.789: INFO: Pod "pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.910734ms
Aug 26 14:38:22.957: INFO: Pod "pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202962522s
Aug 26 14:38:25.004: INFO: Pod "pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250352922s
Aug 26 14:38:27.012: INFO: Pod "pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257989386s
STEP: Saw pod success
Aug 26 14:38:27.012: INFO: Pod "pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9" satisfied condition "success or failure"
Aug 26 14:38:27.064: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 14:38:27.194: INFO: Waiting for pod pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9 to disappear
Aug 26 14:38:27.201: INFO: Pod pod-projected-configmaps-7ca5970e-a431-46dd-a4ca-f2c222c8eff9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:38:27.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9691" for this suite.

• [SLOW TEST:6.538 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1312,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:38:27.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-059f00e7-bd89-4c7e-b595-5784134ba63b
STEP: Creating a pod to test consume configMaps
Aug 26 14:38:27.461: INFO: Waiting up to 5m0s for pod "pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db" in namespace "configmap-3960" to be "success or failure"
Aug 26 14:38:27.519: INFO: Pod "pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db": Phase="Pending", Reason="", readiness=false. Elapsed: 57.765209ms
Aug 26 14:38:29.657: INFO: Pod "pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195233396s
Aug 26 14:38:31.702: INFO: Pod "pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240519367s
Aug 26 14:38:33.708: INFO: Pod "pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24645599s
STEP: Saw pod success
Aug 26 14:38:33.708: INFO: Pod "pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db" satisfied condition "success or failure"
Aug 26 14:38:33.825: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db container configmap-volume-test: 
STEP: delete the pod
Aug 26 14:38:33.850: INFO: Waiting for pod pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db to disappear
Aug 26 14:38:33.854: INFO: Pod pod-configmaps-c64966d1-b0be-4bbf-b72a-87e409e5a4db no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:38:33.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3960" for this suite.

• [SLOW TEST:6.651 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1314,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:38:33.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-0c251694-f92e-41b2-9f33-cca37fe5233f
STEP: Creating a pod to test consume secrets
Aug 26 14:38:34.140: INFO: Waiting up to 5m0s for pod "pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c" in namespace "secrets-9034" to be "success or failure"
Aug 26 14:38:34.167: INFO: Pod "pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.71483ms
Aug 26 14:38:36.706: INFO: Pod "pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56575208s
Aug 26 14:38:38.714: INFO: Pod "pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573262895s
Aug 26 14:38:40.720: INFO: Pod "pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579728156s
STEP: Saw pod success
Aug 26 14:38:40.721: INFO: Pod "pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c" satisfied condition "success or failure"
Aug 26 14:38:40.725: INFO: Trying to get logs from node jerma-worker pod pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c container secret-volume-test: 
STEP: delete the pod
Aug 26 14:38:40.753: INFO: Waiting for pod pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c to disappear
Aug 26 14:38:40.763: INFO: Pod pod-secrets-8030d4e4-6168-44cb-9a4e-938936d0237c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:38:40.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9034" for this suite.

• [SLOW TEST:6.903 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:38:40.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 26 14:38:41.211: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:39:01.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9365" for this suite.

• [SLOW TEST:20.842 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1397,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:39:01.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-hsfb
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 14:39:01.816: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hsfb" in namespace "subpath-5481" to be "success or failure"
Aug 26 14:39:01.838: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.262478ms
Aug 26 14:39:03.987: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169901151s
Aug 26 14:39:05.993: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176744605s
Aug 26 14:39:08.046: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 6.229365376s
Aug 26 14:39:10.053: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 8.236117648s
Aug 26 14:39:12.059: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 10.241911989s
Aug 26 14:39:14.389: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 12.572280157s
Aug 26 14:39:16.903: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 15.085990008s
Aug 26 14:39:18.908: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 17.091440641s
Aug 26 14:39:20.914: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 19.097244471s
Aug 26 14:39:22.922: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 21.105521502s
Aug 26 14:39:24.929: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Running", Reason="", readiness=true. Elapsed: 23.112638447s
Aug 26 14:39:26.961: INFO: Pod "pod-subpath-test-configmap-hsfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.144679581s
STEP: Saw pod success
Aug 26 14:39:26.962: INFO: Pod "pod-subpath-test-configmap-hsfb" satisfied condition "success or failure"
Aug 26 14:39:26.965: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-hsfb container test-container-subpath-configmap-hsfb: 
STEP: delete the pod
Aug 26 14:39:27.012: INFO: Waiting for pod pod-subpath-test-configmap-hsfb to disappear
Aug 26 14:39:27.123: INFO: Pod pod-subpath-test-configmap-hsfb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hsfb
Aug 26 14:39:27.124: INFO: Deleting pod "pod-subpath-test-configmap-hsfb" in namespace "subpath-5481"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:39:27.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5481" for this suite.

• [SLOW TEST:25.530 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":82,"skipped":1403,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:39:27.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:39:38.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6403" for this suite.

• [SLOW TEST:11.698 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":83,"skipped":1403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:39:38.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-920d7f2f-7e36-441f-a45f-dac6e7a17f69
STEP: Creating a pod to test consume secrets
Aug 26 14:39:39.421: INFO: Waiting up to 5m0s for pod "pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7" in namespace "secrets-9204" to be "success or failure"
Aug 26 14:39:39.478: INFO: Pod "pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 56.731387ms
Aug 26 14:39:41.563: INFO: Pod "pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141751581s
Aug 26 14:39:43.569: INFO: Pod "pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148475109s
Aug 26 14:39:45.577: INFO: Pod "pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.156491049s
STEP: Saw pod success
Aug 26 14:39:45.577: INFO: Pod "pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7" satisfied condition "success or failure"
Aug 26 14:39:45.641: INFO: Trying to get logs from node jerma-worker pod pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7 container secret-volume-test: 
STEP: delete the pod
Aug 26 14:39:45.857: INFO: Waiting for pod pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7 to disappear
Aug 26 14:39:45.903: INFO: Pod pod-secrets-53653cc5-b65f-43af-a7b6-f49c67799ce7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:39:45.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9204" for this suite.

• [SLOW TEST:7.062 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1444,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:39:45.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 14:39:46.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db" in namespace "projected-3723" to be "success or failure"
Aug 26 14:39:46.402: INFO: Pod "downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db": Phase="Pending", Reason="", readiness=false. Elapsed: 184.961571ms
Aug 26 14:39:48.472: INFO: Pod "downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254801723s
Aug 26 14:39:50.628: INFO: Pod "downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411409119s
Aug 26 14:39:52.635: INFO: Pod "downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418344324s
STEP: Saw pod success
Aug 26 14:39:52.636: INFO: Pod "downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db" satisfied condition "success or failure"
Aug 26 14:39:52.753: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db container client-container: 
STEP: delete the pod
Aug 26 14:39:52.845: INFO: Waiting for pod downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db to disappear
Aug 26 14:39:53.094: INFO: Pod downwardapi-volume-d0e5f413-9aaa-48ce-9f4c-ffeabeed08db no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:39:53.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3723" for this suite.

• [SLOW TEST:7.193 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1466,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:39:53.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-5850/configmap-test-7016daf7-38b6-4e68-ad98-b758058edf2f
STEP: Creating a pod to test consume configMaps
Aug 26 14:39:54.057: INFO: Waiting up to 5m0s for pod "pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8" in namespace "configmap-5850" to be "success or failure"
Aug 26 14:39:54.208: INFO: Pod "pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 150.44765ms
Aug 26 14:39:56.589: INFO: Pod "pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531174646s
Aug 26 14:39:58.677: INFO: Pod "pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.619393236s
STEP: Saw pod success
Aug 26 14:39:58.677: INFO: Pod "pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8" satisfied condition "success or failure"
Aug 26 14:39:58.738: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8 container env-test: 
STEP: delete the pod
Aug 26 14:39:59.342: INFO: Waiting for pod pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8 to disappear
Aug 26 14:39:59.413: INFO: Pod pod-configmaps-fabfe0c4-4aae-438b-b104-e7c90a564ab8 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:39:59.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5850" for this suite.

• [SLOW TEST:6.318 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1468,"failed":0}
SSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:39:59.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 26 14:40:08.211: INFO: Successfully updated pod "adopt-release-hkzpx"
STEP: Checking that the Job readopts the Pod
Aug 26 14:40:08.211: INFO: Waiting up to 15m0s for pod "adopt-release-hkzpx" in namespace "job-650" to be "adopted"
Aug 26 14:40:08.237: INFO: Pod "adopt-release-hkzpx": Phase="Running", Reason="", readiness=true. Elapsed: 26.167082ms
Aug 26 14:40:10.244: INFO: Pod "adopt-release-hkzpx": Phase="Running", Reason="", readiness=true. Elapsed: 2.033167389s
Aug 26 14:40:10.245: INFO: Pod "adopt-release-hkzpx" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 26 14:40:10.759: INFO: Successfully updated pod "adopt-release-hkzpx"
STEP: Checking that the Job releases the Pod
Aug 26 14:40:10.759: INFO: Waiting up to 15m0s for pod "adopt-release-hkzpx" in namespace "job-650" to be "released"
Aug 26 14:40:10.779: INFO: Pod "adopt-release-hkzpx": Phase="Running", Reason="", readiness=true. Elapsed: 19.654906ms
Aug 26 14:40:10.779: INFO: Pod "adopt-release-hkzpx" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:40:10.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-650" for this suite.

• [SLOW TEST:11.758 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":87,"skipped":1472,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:40:11.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 14:40:18.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 14:40:20.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049618, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049618, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049618, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049617, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 14:40:22.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049618, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049618, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049618, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734049617, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 14:40:25.093: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:40:25.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1014-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:40:26.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2294" for this suite.
STEP: Destroying namespace "webhook-2294-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.707 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":88,"skipped":1487,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:40:26.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-6ff62619-f670-4a4a-b9a0-44b8d4c9cfb4
STEP: Creating a pod to test consume configMaps
Aug 26 14:40:27.362: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8" in namespace "projected-6568" to be "success or failure"
Aug 26 14:40:27.447: INFO: Pod "pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8": Phase="Pending", Reason="", readiness=false. Elapsed: 84.525703ms
Aug 26 14:40:29.564: INFO: Pod "pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20125493s
Aug 26 14:40:31.570: INFO: Pod "pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207594302s
Aug 26 14:40:33.576: INFO: Pod "pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.213083466s
STEP: Saw pod success
Aug 26 14:40:33.576: INFO: Pod "pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8" satisfied condition "success or failure"
Aug 26 14:40:33.579: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 14:40:33.610: INFO: Waiting for pod pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8 to disappear
Aug 26 14:40:33.654: INFO: Pod pod-projected-configmaps-0521f871-fa7b-42b8-ab8c-75c991e781a8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:40:33.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6568" for this suite.

• [SLOW TEST:6.832 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1520,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:40:33.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:40:33.924: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7b11afdf-7374-4199-bcdf-2c2a3324372b" in namespace "security-context-test-4160" to be "success or failure"
Aug 26 14:40:33.936: INFO: Pod "alpine-nnp-false-7b11afdf-7374-4199-bcdf-2c2a3324372b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.346282ms
Aug 26 14:40:36.132: INFO: Pod "alpine-nnp-false-7b11afdf-7374-4199-bcdf-2c2a3324372b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207409373s
Aug 26 14:40:38.167: INFO: Pod "alpine-nnp-false-7b11afdf-7374-4199-bcdf-2c2a3324372b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.242157138s
Aug 26 14:40:38.167: INFO: Pod "alpine-nnp-false-7b11afdf-7374-4199-bcdf-2c2a3324372b" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:40:38.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4160" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1533,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:40:38.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:40:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5753" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":91,"skipped":1536,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:40:39.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 26 14:40:40.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3400'
Aug 26 14:40:42.105: INFO: stderr: ""
Aug 26 14:40:42.105: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 14:40:42.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3400'
Aug 26 14:40:43.268: INFO: stderr: ""
Aug 26 14:40:43.269: INFO: stdout: "update-demo-nautilus-bbgmb update-demo-nautilus-whkk4 "
Aug 26 14:40:43.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbgmb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:40:44.581: INFO: stderr: ""
Aug 26 14:40:44.581: INFO: stdout: ""
Aug 26 14:40:44.581: INFO: update-demo-nautilus-bbgmb is created but not running
Aug 26 14:40:49.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3400'
Aug 26 14:40:50.711: INFO: stderr: ""
Aug 26 14:40:50.711: INFO: stdout: "update-demo-nautilus-bbgmb update-demo-nautilus-whkk4 "
Aug 26 14:40:50.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbgmb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:40:51.920: INFO: stderr: ""
Aug 26 14:40:51.920: INFO: stdout: "true"
Aug 26 14:40:51.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbgmb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:40:53.058: INFO: stderr: ""
Aug 26 14:40:53.058: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 14:40:53.058: INFO: validating pod update-demo-nautilus-bbgmb
Aug 26 14:40:53.082: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 14:40:53.082: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 14:40:53.083: INFO: update-demo-nautilus-bbgmb is verified up and running
Aug 26 14:40:53.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whkk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:40:54.261: INFO: stderr: ""
Aug 26 14:40:54.261: INFO: stdout: "true"
Aug 26 14:40:54.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whkk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:40:55.384: INFO: stderr: ""
Aug 26 14:40:55.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 14:40:55.385: INFO: validating pod update-demo-nautilus-whkk4
Aug 26 14:40:55.528: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 14:40:55.528: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 14:40:55.528: INFO: update-demo-nautilus-whkk4 is verified up and running
STEP: scaling down the replication controller
Aug 26 14:40:55.540: INFO: scanned /root for discovery docs: 
Aug 26 14:40:55.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3400'
Aug 26 14:40:56.755: INFO: stderr: ""
Aug 26 14:40:56.756: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 14:40:56.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3400'
Aug 26 14:40:57.936: INFO: stderr: ""
Aug 26 14:40:57.936: INFO: stdout: "update-demo-nautilus-bbgmb update-demo-nautilus-whkk4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 26 14:41:02.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3400'
Aug 26 14:41:04.142: INFO: stderr: ""
Aug 26 14:41:04.142: INFO: stdout: "update-demo-nautilus-whkk4 "
Aug 26 14:41:04.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whkk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:41:05.266: INFO: stderr: ""
Aug 26 14:41:05.266: INFO: stdout: "true"
Aug 26 14:41:05.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whkk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:41:06.684: INFO: stderr: ""
Aug 26 14:41:06.685: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 14:41:06.685: INFO: validating pod update-demo-nautilus-whkk4
Aug 26 14:41:06.754: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 14:41:06.754: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 14:41:06.754: INFO: update-demo-nautilus-whkk4 is verified up and running
STEP: scaling up the replication controller
Aug 26 14:41:06.763: INFO: scanned /root for discovery docs: 
Aug 26 14:41:06.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3400'
Aug 26 14:41:09.071: INFO: stderr: ""
Aug 26 14:41:09.071: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 14:41:09.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3400'
Aug 26 14:41:10.465: INFO: stderr: ""
Aug 26 14:41:10.465: INFO: stdout: "update-demo-nautilus-qzqnc update-demo-nautilus-whkk4 "
Aug 26 14:41:10.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qzqnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:41:11.731: INFO: stderr: ""
Aug 26 14:41:11.731: INFO: stdout: ""
Aug 26 14:41:11.731: INFO: update-demo-nautilus-qzqnc is created but not running
Aug 26 14:41:16.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3400'
Aug 26 14:41:17.921: INFO: stderr: ""
Aug 26 14:41:17.921: INFO: stdout: "update-demo-nautilus-qzqnc update-demo-nautilus-whkk4 "
Aug 26 14:41:17.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qzqnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:41:19.050: INFO: stderr: ""
Aug 26 14:41:19.050: INFO: stdout: "true"
Aug 26 14:41:19.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qzqnc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:41:20.207: INFO: stderr: ""
Aug 26 14:41:20.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 14:41:20.208: INFO: validating pod update-demo-nautilus-qzqnc
Aug 26 14:41:20.214: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 14:41:20.214: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 14:41:20.214: INFO: update-demo-nautilus-qzqnc is verified up and running
Aug 26 14:41:20.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whkk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:41:21.355: INFO: stderr: ""
Aug 26 14:41:21.356: INFO: stdout: "true"
Aug 26 14:41:21.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whkk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3400'
Aug 26 14:41:22.651: INFO: stderr: ""
Aug 26 14:41:22.651: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 14:41:22.652: INFO: validating pod update-demo-nautilus-whkk4
Aug 26 14:41:22.657: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 14:41:22.657: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 14:41:22.657: INFO: update-demo-nautilus-whkk4 is verified up and running
STEP: using delete to clean up resources
Aug 26 14:41:22.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3400'
Aug 26 14:41:23.877: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 14:41:23.877: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 26 14:41:23.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3400'
Aug 26 14:41:25.065: INFO: stderr: "No resources found in kubectl-3400 namespace.\n"
Aug 26 14:41:25.065: INFO: stdout: ""
Aug 26 14:41:25.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3400 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 14:41:26.420: INFO: stderr: ""
Aug 26 14:41:26.420: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:41:26.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3400" for this suite.

• [SLOW TEST:47.389 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":92,"skipped":1537,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:41:27.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:41:33.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2380" for this suite.

• [SLOW TEST:6.959 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1552,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:41:34.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-8733
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8733 to expose endpoints map[]
Aug 26 14:41:34.129: INFO: successfully validated that service multi-endpoint-test in namespace services-8733 exposes endpoints map[] (7.639586ms elapsed)
STEP: Creating pod pod1 in namespace services-8733
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8733 to expose endpoints map[pod1:[100]]
Aug 26 14:41:38.289: INFO: successfully validated that service multi-endpoint-test in namespace services-8733 exposes endpoints map[pod1:[100]] (4.15248306s elapsed)
STEP: Creating pod pod2 in namespace services-8733
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8733 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 26 14:41:42.891: INFO: successfully validated that service multi-endpoint-test in namespace services-8733 exposes endpoints map[pod1:[100] pod2:[101]] (4.594045939s elapsed)
STEP: Deleting pod pod1 in namespace services-8733
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8733 to expose endpoints map[pod2:[101]]
Aug 26 14:41:42.966: INFO: successfully validated that service multi-endpoint-test in namespace services-8733 exposes endpoints map[pod2:[101]] (67.640645ms elapsed)
STEP: Deleting pod pod2 in namespace services-8733
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8733 to expose endpoints map[]
Aug 26 14:41:43.371: INFO: successfully validated that service multi-endpoint-test in namespace services-8733 exposes endpoints map[] (398.331036ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:41:44.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8733" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.076 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":94,"skipped":1559,"failed":0}
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:41:45.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:42:03.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8672" for this suite.
STEP: Destroying namespace "nsdeletetest-4090" for this suite.
Aug 26 14:42:03.319: INFO: Namespace nsdeletetest-4090 was already deleted
STEP: Destroying namespace "nsdeletetest-5778" for this suite.

• [SLOW TEST:18.241 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":95,"skipped":1559,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:42:03.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8098/configmap-test-76fc974b-ac0a-42bc-aecf-506967f51515
STEP: Creating a pod to test consume configMaps
Aug 26 14:42:03.567: INFO: Waiting up to 5m0s for pod "pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248" in namespace "configmap-8098" to be "success or failure"
Aug 26 14:42:03.617: INFO: Pod "pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248": Phase="Pending", Reason="", readiness=false. Elapsed: 49.698874ms
Aug 26 14:42:05.627: INFO: Pod "pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059988045s
Aug 26 14:42:07.894: INFO: Pod "pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326543057s
Aug 26 14:42:09.983: INFO: Pod "pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.416326672s
STEP: Saw pod success
Aug 26 14:42:09.984: INFO: Pod "pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248" satisfied condition "success or failure"
Aug 26 14:42:10.354: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248 container env-test: 
STEP: delete the pod
Aug 26 14:42:11.441: INFO: Waiting for pod pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248 to disappear
Aug 26 14:42:11.554: INFO: Pod pod-configmaps-91ec824f-408c-4074-b22c-83eb4644a248 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:42:11.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8098" for this suite.

• [SLOW TEST:8.239 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1561,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:42:11.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:42:18.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1894" for this suite.

• [SLOW TEST:6.714 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":97,"skipped":1563,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:42:18.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:42:46.446: INFO: Container started at 2020-08-26 14:42:23 +0000 UTC, pod became ready at 2020-08-26 14:42:45 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:42:46.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5486" for this suite.

• [SLOW TEST:28.176 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1583,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:42:46.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 26 14:42:46.624: INFO: Waiting up to 5m0s for pod "pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5" in namespace "emptydir-2247" to be "success or failure"
Aug 26 14:42:46.634: INFO: Pod "pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.943939ms
Aug 26 14:42:48.639: INFO: Pod "pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015263185s
Aug 26 14:42:50.644: INFO: Pod "pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020381008s
STEP: Saw pod success
Aug 26 14:42:50.645: INFO: Pod "pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5" satisfied condition "success or failure"
Aug 26 14:42:50.648: INFO: Trying to get logs from node jerma-worker2 pod pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5 container test-container: 
STEP: delete the pod
Aug 26 14:42:50.676: INFO: Waiting for pod pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5 to disappear
Aug 26 14:42:50.681: INFO: Pod pod-fe198da5-2cf2-4050-b11e-fdedc4fb9bd5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:42:50.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2247" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1600,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:42:50.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 14:42:50.804: INFO: Waiting up to 5m0s for pod "downward-api-d30c0e79-687e-4177-8b73-419b45c13904" in namespace "downward-api-3100" to be "success or failure"
Aug 26 14:42:50.827: INFO: Pod "downward-api-d30c0e79-687e-4177-8b73-419b45c13904": Phase="Pending", Reason="", readiness=false. Elapsed: 22.186263ms
Aug 26 14:42:52.833: INFO: Pod "downward-api-d30c0e79-687e-4177-8b73-419b45c13904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028717975s
Aug 26 14:42:54.838: INFO: Pod "downward-api-d30c0e79-687e-4177-8b73-419b45c13904": Phase="Running", Reason="", readiness=true. Elapsed: 4.034065163s
Aug 26 14:42:56.843: INFO: Pod "downward-api-d30c0e79-687e-4177-8b73-419b45c13904": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03857719s
STEP: Saw pod success
Aug 26 14:42:56.843: INFO: Pod "downward-api-d30c0e79-687e-4177-8b73-419b45c13904" satisfied condition "success or failure"
Aug 26 14:42:56.847: INFO: Trying to get logs from node jerma-worker pod downward-api-d30c0e79-687e-4177-8b73-419b45c13904 container dapi-container: 
STEP: delete the pod
Aug 26 14:42:56.918: INFO: Waiting for pod downward-api-d30c0e79-687e-4177-8b73-419b45c13904 to disappear
Aug 26 14:42:56.922: INFO: Pod downward-api-d30c0e79-687e-4177-8b73-419b45c13904 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:42:56.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3100" for this suite.

• [SLOW TEST:6.238 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1621,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:42:56.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 14:42:56.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242" in namespace "projected-7400" to be "success or failure"
Aug 26 14:42:57.006: INFO: Pod "downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242": Phase="Pending", Reason="", readiness=false. Elapsed: 7.372331ms
Aug 26 14:42:59.029: INFO: Pod "downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029619175s
Aug 26 14:43:01.079: INFO: Pod "downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080278002s
STEP: Saw pod success
Aug 26 14:43:01.079: INFO: Pod "downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242" satisfied condition "success or failure"
Aug 26 14:43:01.084: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242 container client-container: 
STEP: delete the pod
Aug 26 14:43:01.517: INFO: Waiting for pod downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242 to disappear
Aug 26 14:43:01.557: INFO: Pod downwardapi-volume-b2f7bde7-6c26-49d6-b2f7-5e63b046b242 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:43:01.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7400" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1629,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:43:01.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:43:09.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6795" for this suite.

• [SLOW TEST:8.025 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":102,"skipped":1652,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:43:09.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:43:09.891: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"10b9e158-f82a-4aa2-a40d-79d7fb4a1018", Controller:(*bool)(0x81e03a2), BlockOwnerDeletion:(*bool)(0x81e03a3)}}
Aug 26 14:43:09.914: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7d176b78-1c22-4359-b958-0047ff835791", Controller:(*bool)(0x81e056a), BlockOwnerDeletion:(*bool)(0x81e056b)}}
Aug 26 14:43:09.929: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"91e43f21-e2c5-4db1-82b2-628f534fdfd1", Controller:(*bool)(0x8840982), BlockOwnerDeletion:(*bool)(0x8840983)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:43:15.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2630" for this suite.

• [SLOW TEST:5.442 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":103,"skipped":1665,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:43:15.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0826 14:43:47.075339       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 14:43:47.076: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:43:47.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1133" for this suite.

• [SLOW TEST:31.953 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":104,"skipped":1708,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:43:47.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:43:47.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7078" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1712,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:43:47.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 14:43:47.763: INFO: Waiting up to 5m0s for pod "pod-c4032a34-1863-465a-a535-81655ef52833" in namespace "emptydir-5732" to be "success or failure"
Aug 26 14:43:47.781: INFO: Pod "pod-c4032a34-1863-465a-a535-81655ef52833": Phase="Pending", Reason="", readiness=false. Elapsed: 17.087755ms
Aug 26 14:43:50.140: INFO: Pod "pod-c4032a34-1863-465a-a535-81655ef52833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376997846s
Aug 26 14:43:52.231: INFO: Pod "pod-c4032a34-1863-465a-a535-81655ef52833": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467312431s
Aug 26 14:43:54.236: INFO: Pod "pod-c4032a34-1863-465a-a535-81655ef52833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.472863602s
STEP: Saw pod success
Aug 26 14:43:54.237: INFO: Pod "pod-c4032a34-1863-465a-a535-81655ef52833" satisfied condition "success or failure"
Aug 26 14:43:54.255: INFO: Trying to get logs from node jerma-worker pod pod-c4032a34-1863-465a-a535-81655ef52833 container test-container: 
STEP: delete the pod
Aug 26 14:43:55.045: INFO: Waiting for pod pod-c4032a34-1863-465a-a535-81655ef52833 to disappear
Aug 26 14:43:55.080: INFO: Pod pod-c4032a34-1863-465a-a535-81655ef52833 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:43:55.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5732" for this suite.

• [SLOW TEST:7.760 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1745,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:43:55.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-8474
STEP: creating replication controller nodeport-test in namespace services-8474
I0826 14:43:55.996501       7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8474, replica count: 2
I0826 14:43:59.047912       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 14:44:02.048712       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 14:44:05.052127       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 14:44:05.052: INFO: Creating new exec pod
Aug 26 14:44:10.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8474 execpodlvppp -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 26 14:44:12.228: INFO: stderr: "I0826 14:44:12.145377    1357 log.go:172] (0x2c64000) (0x2c64070) Create stream\nI0826 14:44:12.146716    1357 log.go:172] (0x2c64000) (0x2c64070) Stream added, broadcasting: 1\nI0826 14:44:12.155297    1357 log.go:172] (0x2c64000) Reply frame received for 1\nI0826 14:44:12.155845    1357 log.go:172] (0x2c64000) (0x2b7ca10) Create stream\nI0826 14:44:12.155912    1357 log.go:172] (0x2c64000) (0x2b7ca10) Stream added, broadcasting: 3\nI0826 14:44:12.157122    1357 log.go:172] (0x2c64000) Reply frame received for 3\nI0826 14:44:12.157302    1357 log.go:172] (0x2c64000) (0x24b01c0) Create stream\nI0826 14:44:12.157349    1357 log.go:172] (0x2c64000) (0x24b01c0) Stream added, broadcasting: 5\nI0826 14:44:12.158318    1357 log.go:172] (0x2c64000) Reply frame received for 5\nI0826 14:44:12.213650    1357 log.go:172] (0x2c64000) Data frame received for 5\nI0826 14:44:12.213858    1357 log.go:172] (0x2c64000) Data frame received for 3\nI0826 14:44:12.213981    1357 log.go:172] (0x2b7ca10) (3) Data frame handling\nI0826 14:44:12.214051    1357 log.go:172] (0x24b01c0) (5) Data frame handling\nI0826 14:44:12.214539    1357 log.go:172] (0x2c64000) Data frame received for 1\nI0826 14:44:12.214614    1357 log.go:172] (0x2c64070) (1) Data frame handling\nI0826 14:44:12.215258    1357 log.go:172] (0x24b01c0) (5) Data frame sent\nI0826 14:44:12.215429    1357 log.go:172] (0x2c64070) (1) Data frame sent\nI0826 14:44:12.215554    1357 log.go:172] (0x2c64000) Data frame received for 5\nI0826 14:44:12.215618    1357 log.go:172] (0x24b01c0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0826 14:44:12.216927    1357 log.go:172] (0x24b01c0) (5) Data frame sent\nI0826 14:44:12.216978    1357 log.go:172] (0x2c64000) Data frame received for 5\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0826 14:44:12.218124    1357 log.go:172] (0x2c64000) (0x2c64070) Stream removed, broadcasting: 1\nI0826 14:44:12.218656    1357 log.go:172] (0x24b01c0) (5) Data frame handling\nI0826 14:44:12.218878    1357 log.go:172] (0x2c64000) Go away received\nI0826 14:44:12.220548    1357 log.go:172] (0x2c64000) (0x2c64070) Stream removed, broadcasting: 1\nI0826 14:44:12.220659    1357 log.go:172] (0x2c64000) (0x2b7ca10) Stream removed, broadcasting: 3\nI0826 14:44:12.220911    1357 log.go:172] (0x2c64000) (0x24b01c0) Stream removed, broadcasting: 5\n"
Aug 26 14:44:12.229: INFO: stdout: ""
Aug 26 14:44:12.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8474 execpodlvppp -- /bin/sh -x -c nc -zv -t -w 2 10.101.103.166 80'
Aug 26 14:44:15.930: INFO: stderr: "I0826 14:44:15.808258    1379 log.go:172] (0x2b9c620) (0x2b9c690) Create stream\nI0826 14:44:15.810162    1379 log.go:172] (0x2b9c620) (0x2b9c690) Stream added, broadcasting: 1\nI0826 14:44:15.826526    1379 log.go:172] (0x2b9c620) Reply frame received for 1\nI0826 14:44:15.827409    1379 log.go:172] (0x2b9c620) (0x24b49a0) Create stream\nI0826 14:44:15.827526    1379 log.go:172] (0x2b9c620) (0x24b49a0) Stream added, broadcasting: 3\nI0826 14:44:15.830212    1379 log.go:172] (0x2b9c620) Reply frame received for 3\nI0826 14:44:15.830563    1379 log.go:172] (0x2b9c620) (0x2706ee0) Create stream\nI0826 14:44:15.830666    1379 log.go:172] (0x2b9c620) (0x2706ee0) Stream added, broadcasting: 5\nI0826 14:44:15.832666    1379 log.go:172] (0x2b9c620) Reply frame received for 5\nI0826 14:44:15.906771    1379 log.go:172] (0x2b9c620) Data frame received for 5\nI0826 14:44:15.907001    1379 log.go:172] (0x2706ee0) (5) Data frame handling\nI0826 14:44:15.907325    1379 log.go:172] (0x2b9c620) Data frame received for 3\n+ nc -zv -t -w 2 10.101.103.166 80\nConnection to 10.101.103.166 80 port [tcp/http] succeeded!\nI0826 14:44:15.907571    1379 log.go:172] (0x24b49a0) (3) Data frame handling\nI0826 14:44:15.907772    1379 log.go:172] (0x2706ee0) (5) Data frame sent\nI0826 14:44:15.907961    1379 log.go:172] (0x2b9c620) Data frame received for 5\nI0826 14:44:15.908038    1379 log.go:172] (0x2706ee0) (5) Data frame handling\nI0826 14:44:15.908310    1379 log.go:172] (0x2b9c620) Data frame received for 1\nI0826 14:44:15.908487    1379 log.go:172] (0x2b9c690) (1) Data frame handling\nI0826 14:44:15.908672    1379 log.go:172] (0x2b9c690) (1) Data frame sent\nI0826 14:44:15.909494    1379 log.go:172] (0x2b9c620) (0x2b9c690) Stream removed, broadcasting: 1\nI0826 14:44:15.911676    1379 log.go:172] (0x2b9c620) Go away received\nI0826 14:44:15.914178    1379 log.go:172] (0x2b9c620) (0x2b9c690) Stream removed, broadcasting: 1\nI0826 14:44:15.914386    1379 log.go:172] (0x2b9c620) (0x24b49a0) Stream removed, broadcasting: 3\nI0826 14:44:15.914551    1379 log.go:172] (0x2b9c620) (0x2706ee0) Stream removed, broadcasting: 5\n"
Aug 26 14:44:15.931: INFO: stdout: ""
Aug 26 14:44:15.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8474 execpodlvppp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31631'
Aug 26 14:44:34.100: INFO: stderr: "I0826 14:44:33.982485    1404 log.go:172] (0x2b05960) (0x2b059d0) Create stream\nI0826 14:44:33.985054    1404 log.go:172] (0x2b05960) (0x2b059d0) Stream added, broadcasting: 1\nI0826 14:44:34.004237    1404 log.go:172] (0x2b05960) Reply frame received for 1\nI0826 14:44:34.005026    1404 log.go:172] (0x2b05960) (0x2822070) Create stream\nI0826 14:44:34.005123    1404 log.go:172] (0x2b05960) (0x2822070) Stream added, broadcasting: 3\nI0826 14:44:34.006680    1404 log.go:172] (0x2b05960) Reply frame received for 3\nI0826 14:44:34.006967    1404 log.go:172] (0x2b05960) (0x24b4cb0) Create stream\nI0826 14:44:34.007058    1404 log.go:172] (0x2b05960) (0x24b4cb0) Stream added, broadcasting: 5\nI0826 14:44:34.008176    1404 log.go:172] (0x2b05960) Reply frame received for 5\nI0826 14:44:34.077304    1404 log.go:172] (0x2b05960) Data frame received for 3\nI0826 14:44:34.077600    1404 log.go:172] (0x2b05960) Data frame received for 1\nI0826 14:44:34.078059    1404 log.go:172] (0x2b05960) Data frame received for 5\nI0826 14:44:34.078330    1404 log.go:172] (0x24b4cb0) (5) Data frame handling\nI0826 14:44:34.078652    1404 log.go:172] (0x2b059d0) (1) Data frame handling\nI0826 14:44:34.078946    1404 log.go:172] (0x2822070) (3) Data frame handling\nI0826 14:44:34.080269    1404 log.go:172] (0x24b4cb0) (5) Data frame sent\nI0826 14:44:34.080559    1404 log.go:172] (0x2b05960) Data frame received for 5\nI0826 14:44:34.080706    1404 log.go:172] (0x24b4cb0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 31631\nConnection to 172.18.0.6 31631 port [tcp/31631] succeeded!\nI0826 14:44:34.081672    1404 log.go:172] (0x2b059d0) (1) Data frame sent\nI0826 14:44:34.082862    1404 log.go:172] (0x2b05960) (0x2b059d0) Stream removed, broadcasting: 1\nI0826 14:44:34.083981    1404 log.go:172] (0x2b05960) Go away received\nI0826 14:44:34.086166    1404 log.go:172] (0x2b05960) (0x2b059d0) Stream removed, broadcasting: 1\nI0826 14:44:34.086375    1404 log.go:172] (0x2b05960) (0x2822070) Stream removed, broadcasting: 3\nI0826 14:44:34.086572    1404 log.go:172] (0x2b05960) (0x24b4cb0) Stream removed, broadcasting: 5\n"
Aug 26 14:44:34.101: INFO: stdout: ""
Aug 26 14:44:34.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8474 execpodlvppp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31631'
Aug 26 14:44:35.494: INFO: stderr: "I0826 14:44:35.385858    1442 log.go:172] (0x288c070) (0x288c0e0) Create stream\nI0826 14:44:35.388540    1442 log.go:172] (0x288c070) (0x288c0e0) Stream added, broadcasting: 1\nI0826 14:44:35.400017    1442 log.go:172] (0x288c070) Reply frame received for 1\nI0826 14:44:35.400826    1442 log.go:172] (0x288c070) (0x2944070) Create stream\nI0826 14:44:35.400924    1442 log.go:172] (0x288c070) (0x2944070) Stream added, broadcasting: 3\nI0826 14:44:35.402526    1442 log.go:172] (0x288c070) Reply frame received for 3\nI0826 14:44:35.402910    1442 log.go:172] (0x288c070) (0x2944230) Create stream\nI0826 14:44:35.403004    1442 log.go:172] (0x288c070) (0x2944230) Stream added, broadcasting: 5\nI0826 14:44:35.404495    1442 log.go:172] (0x288c070) Reply frame received for 5\nI0826 14:44:35.474802    1442 log.go:172] (0x288c070) Data frame received for 3\nI0826 14:44:35.475157    1442 log.go:172] (0x288c070) Data frame received for 5\nI0826 14:44:35.475322    1442 log.go:172] (0x288c070) Data frame received for 1\nI0826 14:44:35.475417    1442 log.go:172] (0x288c0e0) (1) Data frame handling\nI0826 14:44:35.475529    1442 log.go:172] (0x2944230) (5) Data frame handling\nI0826 14:44:35.475737    1442 log.go:172] (0x2944070) (3) Data frame handling\nI0826 14:44:35.475993    1442 log.go:172] (0x2944230) (5) Data frame sent\nI0826 14:44:35.476399    1442 log.go:172] (0x288c070) Data frame received for 5\nI0826 14:44:35.476489    1442 log.go:172] (0x2944230) (5) Data frame handling\nI0826 14:44:35.476620    1442 log.go:172] (0x288c0e0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 31631\nConnection to 172.18.0.3 31631 port [tcp/31631] succeeded!\nI0826 14:44:35.477798    1442 log.go:172] (0x288c070) (0x288c0e0) Stream removed, broadcasting: 1\nI0826 14:44:35.479782    1442 log.go:172] (0x288c070) Go away received\nI0826 14:44:35.481889    1442 log.go:172] (0x288c070) (0x288c0e0) Stream removed, broadcasting: 1\nI0826 14:44:35.482109    1442 log.go:172] (0x288c070) (0x2944070) Stream removed, broadcasting: 3\nI0826 14:44:35.482291    1442 log.go:172] (0x288c070) (0x2944230) Stream removed, broadcasting: 5\n"
Aug 26 14:44:35.494: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:44:35.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8474" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:40.364 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":107,"skipped":1764,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:44:35.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-66100e7c-9a97-4e49-8d13-424b9aaf77b1
STEP: Creating a pod to test consume secrets
Aug 26 14:44:37.586: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb" in namespace "projected-1845" to be "success or failure"
Aug 26 14:44:37.670: INFO: Pod "pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 83.79596ms
Aug 26 14:44:39.955: INFO: Pod "pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36847368s
Aug 26 14:44:42.062: INFO: Pod "pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475885709s
Aug 26 14:44:44.327: INFO: Pod "pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb": Phase="Running", Reason="", readiness=true. Elapsed: 6.740000191s
Aug 26 14:44:46.592: INFO: Pod "pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.005028637s
STEP: Saw pod success
Aug 26 14:44:46.592: INFO: Pod "pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb" satisfied condition "success or failure"
Aug 26 14:44:46.806: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 14:44:47.281: INFO: Waiting for pod pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb to disappear
Aug 26 14:44:47.547: INFO: Pod pod-projected-secrets-3cc8da2a-0df4-4934-87b3-74c001d8c9cb no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:44:47.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1845" for this suite.

• [SLOW TEST:11.954 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1767,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:44:47.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-99740622-0af6-4326-80ab-3e98df7941cc
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:44:54.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7575" for this suite.

• [SLOW TEST:6.739 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1773,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:44:54.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-7d68256a-5c37-4956-8138-62525af05074
STEP: Creating a pod to test consume secrets
Aug 26 14:44:54.667: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf" in namespace "projected-5087" to be "success or failure"
Aug 26 14:44:54.763: INFO: Pod "pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf": Phase="Pending", Reason="", readiness=false. Elapsed: 96.228929ms
Aug 26 14:44:56.770: INFO: Pod "pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10328141s
Aug 26 14:44:58.778: INFO: Pod "pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110848041s
STEP: Saw pod success
Aug 26 14:44:58.778: INFO: Pod "pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf" satisfied condition "success or failure"
Aug 26 14:44:58.782: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 14:44:58.838: INFO: Waiting for pod pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf to disappear
Aug 26 14:44:58.855: INFO: Pod pod-projected-secrets-c839cbaa-ec2c-498a-ad4e-5fc817c032cf no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:44:58.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5087" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1774,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:44:58.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0826 14:45:02.097882       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 14:45:02.098: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:45:02.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5136" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":111,"skipped":1784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:45:02.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 26 14:45:02.359: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6054" to be "success or failure"
Aug 26 14:45:02.424: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 65.712203ms
Aug 26 14:45:04.699: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340335575s
Aug 26 14:45:06.794: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435523328s
Aug 26 14:45:08.801: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441999144s
Aug 26 14:45:10.805: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446257187s
Aug 26 14:45:12.812: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.453154232s
Aug 26 14:45:14.820: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.460777626s
STEP: Saw pod success
Aug 26 14:45:14.820: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 26 14:45:14.825: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 26 14:45:14.849: INFO: Waiting for pod pod-host-path-test to disappear
Aug 26 14:45:14.852: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:45:14.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6054" for this suite.

• [SLOW TEST:12.609 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1851,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:45:14.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-40311b3b-f99d-4081-a379-999faa1c13a5
STEP: Creating a pod to test consume secrets
Aug 26 14:45:15.172: INFO: Waiting up to 5m0s for pod "pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524" in namespace "secrets-8361" to be "success or failure"
Aug 26 14:45:15.178: INFO: Pod "pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524": Phase="Pending", Reason="", readiness=false. Elapsed: 5.342284ms
Aug 26 14:45:17.273: INFO: Pod "pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100121206s
Aug 26 14:45:19.416: INFO: Pod "pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242940965s
Aug 26 14:45:22.129: INFO: Pod "pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956238268s
Aug 26 14:45:24.136: INFO: Pod "pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.963465751s
STEP: Saw pod success
Aug 26 14:45:24.137: INFO: Pod "pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524" satisfied condition "success or failure"
Aug 26 14:45:24.203: INFO: Trying to get logs from node jerma-worker pod pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524 container secret-volume-test: 
STEP: delete the pod
Aug 26 14:45:24.299: INFO: Waiting for pod pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524 to disappear
Aug 26 14:45:24.328: INFO: Pod pod-secrets-855e9a21-cd75-47ac-a0bb-0ca9ed050524 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:45:24.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8361" for this suite.
STEP: Destroying namespace "secret-namespace-3155" for this suite.

• [SLOW TEST:9.558 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1868,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:45:24.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 26 14:45:24.733: INFO: Waiting up to 5m0s for pod "pod-ab262f08-6dee-413b-8943-619dd4a0d3c8" in namespace "emptydir-5525" to be "success or failure"
Aug 26 14:45:24.808: INFO: Pod "pod-ab262f08-6dee-413b-8943-619dd4a0d3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 75.418764ms
Aug 26 14:45:26.851: INFO: Pod "pod-ab262f08-6dee-413b-8943-619dd4a0d3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117574122s
Aug 26 14:45:28.856: INFO: Pod "pod-ab262f08-6dee-413b-8943-619dd4a0d3c8": Phase="Running", Reason="", readiness=true. Elapsed: 4.122963883s
Aug 26 14:45:30.862: INFO: Pod "pod-ab262f08-6dee-413b-8943-619dd4a0d3c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12891816s
STEP: Saw pod success
Aug 26 14:45:30.862: INFO: Pod "pod-ab262f08-6dee-413b-8943-619dd4a0d3c8" satisfied condition "success or failure"
Aug 26 14:45:30.867: INFO: Trying to get logs from node jerma-worker pod pod-ab262f08-6dee-413b-8943-619dd4a0d3c8 container test-container: 
STEP: delete the pod
Aug 26 14:45:30.887: INFO: Waiting for pod pod-ab262f08-6dee-413b-8943-619dd4a0d3c8 to disappear
Aug 26 14:45:30.909: INFO: Pod pod-ab262f08-6dee-413b-8943-619dd4a0d3c8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:45:30.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5525" for this suite.

• [SLOW TEST:6.514 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1874,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:45:30.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-3fe44480-47a8-479f-900a-441cf65205d9
STEP: Creating configMap with name cm-test-opt-upd-fe9e5147-60fe-42dc-b236-255eb6bb097c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3fe44480-47a8-479f-900a-441cf65205d9
STEP: Updating configmap cm-test-opt-upd-fe9e5147-60fe-42dc-b236-255eb6bb097c
STEP: Creating configMap with name cm-test-opt-create-14297264-e107-466f-bcfa-90148d93e427
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:45:41.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3185" for this suite.

• [SLOW TEST:10.451 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1882,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:45:41.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 14:45:41.537: INFO: Waiting up to 5m0s for pod "downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b" in namespace "downward-api-4357" to be "success or failure"
Aug 26 14:45:41.556: INFO: Pod "downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.993632ms
Aug 26 14:45:43.617: INFO: Pod "downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079519501s
Aug 26 14:45:45.849: INFO: Pod "downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311307451s
Aug 26 14:45:47.877: INFO: Pod "downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339740119s
Aug 26 14:45:50.309: INFO: Pod "downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.770937047s
STEP: Saw pod success
Aug 26 14:45:50.309: INFO: Pod "downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b" satisfied condition "success or failure"
Aug 26 14:45:50.313: INFO: Trying to get logs from node jerma-worker2 pod downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b container dapi-container: 
STEP: delete the pod
Aug 26 14:45:51.022: INFO: Waiting for pod downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b to disappear
Aug 26 14:45:51.026: INFO: Pod downward-api-677a56a9-b987-4883-9209-ef8123e9bd6b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:45:51.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4357" for this suite.

• [SLOW TEST:9.750 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1885,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:45:51.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:46:09.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-129" for this suite.

• [SLOW TEST:18.502 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":117,"skipped":1886,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:46:09.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9513
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 14:46:10.892: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 26 14:46:46.262: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.66:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9513 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 14:46:46.262: INFO: >>> kubeConfig: /root/.kube/config
I0826 14:46:46.373468       7 log.go:172] (0x855d500) (0x855d5e0) Create stream
I0826 14:46:46.373917       7 log.go:172] (0x855d500) (0x855d5e0) Stream added, broadcasting: 1
I0826 14:46:46.383505       7 log.go:172] (0x855d500) Reply frame received for 1
I0826 14:46:46.383885       7 log.go:172] (0x855d500) (0x8986070) Create stream
I0826 14:46:46.384055       7 log.go:172] (0x855d500) (0x8986070) Stream added, broadcasting: 3
I0826 14:46:46.386489       7 log.go:172] (0x855d500) Reply frame received for 3
I0826 14:46:46.386947       7 log.go:172] (0x855d500) (0x7feea80) Create stream
I0826 14:46:46.387105       7 log.go:172] (0x855d500) (0x7feea80) Stream added, broadcasting: 5
I0826 14:46:46.391357       7 log.go:172] (0x855d500) Reply frame received for 5
I0826 14:46:46.463564       7 log.go:172] (0x855d500) Data frame received for 3
I0826 14:46:46.463784       7 log.go:172] (0x8986070) (3) Data frame handling
I0826 14:46:46.463895       7 log.go:172] (0x855d500) Data frame received for 5
I0826 14:46:46.464050       7 log.go:172] (0x7feea80) (5) Data frame handling
I0826 14:46:46.464163       7 log.go:172] (0x8986070) (3) Data frame sent
I0826 14:46:46.464258       7 log.go:172] (0x855d500) Data frame received for 3
I0826 14:46:46.464384       7 log.go:172] (0x8986070) (3) Data frame handling
I0826 14:46:46.464878       7 log.go:172] (0x855d500) Data frame received for 1
I0826 14:46:46.464991       7 log.go:172] (0x855d5e0) (1) Data frame handling
I0826 14:46:46.465113       7 log.go:172] (0x855d5e0) (1) Data frame sent
I0826 14:46:46.465231       7 log.go:172] (0x855d500) (0x855d5e0) Stream removed, broadcasting: 1
I0826 14:46:46.465346       7 log.go:172] (0x855d500) Go away received
I0826 14:46:46.465945       7 log.go:172] (0x855d500) (0x855d5e0) Stream removed, broadcasting: 1
I0826 14:46:46.466151       7 log.go:172] (0x855d500) (0x8986070) Stream removed, broadcasting: 3
I0826 14:46:46.466295       7 log.go:172] (0x855d500) (0x7feea80) Stream removed, broadcasting: 5
Aug 26 14:46:46.466: INFO: Found all expected endpoints: [netserver-0]
Aug 26 14:46:46.471: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.192:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9513 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 14:46:46.471: INFO: >>> kubeConfig: /root/.kube/config
I0826 14:46:46.572555       7 log.go:172] (0x78fb030) (0x78fb180) Create stream
I0826 14:46:46.572681       7 log.go:172] (0x78fb030) (0x78fb180) Stream added, broadcasting: 1
I0826 14:46:46.577006       7 log.go:172] (0x78fb030) Reply frame received for 1
I0826 14:46:46.577220       7 log.go:172] (0x78fb030) (0x7d558f0) Create stream
I0826 14:46:46.577312       7 log.go:172] (0x78fb030) (0x7d558f0) Stream added, broadcasting: 3
I0826 14:46:46.578535       7 log.go:172] (0x78fb030) Reply frame received for 3
I0826 14:46:46.578648       7 log.go:172] (0x78fb030) (0x9e737a0) Create stream
I0826 14:46:46.578709       7 log.go:172] (0x78fb030) (0x9e737a0) Stream added, broadcasting: 5
I0826 14:46:46.579739       7 log.go:172] (0x78fb030) Reply frame received for 5
I0826 14:46:46.630254       7 log.go:172] (0x78fb030) Data frame received for 5
I0826 14:46:46.630430       7 log.go:172] (0x9e737a0) (5) Data frame handling
I0826 14:46:46.630568       7 log.go:172] (0x78fb030) Data frame received for 3
I0826 14:46:46.630724       7 log.go:172] (0x7d558f0) (3) Data frame handling
I0826 14:46:46.630893       7 log.go:172] (0x7d558f0) (3) Data frame sent
I0826 14:46:46.631070       7 log.go:172] (0x78fb030) Data frame received for 3
I0826 14:46:46.631181       7 log.go:172] (0x7d558f0) (3) Data frame handling
I0826 14:46:46.631340       7 log.go:172] (0x78fb030) Data frame received for 1
I0826 14:46:46.631444       7 log.go:172] (0x78fb180) (1) Data frame handling
I0826 14:46:46.631542       7 log.go:172] (0x78fb180) (1) Data frame sent
I0826 14:46:46.631633       7 log.go:172] (0x78fb030) (0x78fb180) Stream removed, broadcasting: 1
I0826 14:46:46.631718       7 log.go:172] (0x78fb030) Go away received
I0826 14:46:46.631999       7 log.go:172] (0x78fb030) (0x78fb180) Stream removed, broadcasting: 1
I0826 14:46:46.632120       7 log.go:172] (0x78fb030) (0x7d558f0) Stream removed, broadcasting: 3
I0826 14:46:46.632185       7 log.go:172] (0x78fb030) (0x9e737a0) Stream removed, broadcasting: 5
Aug 26 14:46:46.632: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:46:46.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9513" for this suite.

• [SLOW TEST:37.023 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:46:46.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7673.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7673.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7673.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7673.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 14:47:01.535: INFO: DNS probes using dns-7673/dns-test-f7ad941c-8e7e-4ff9-8d58-76575a7bfa72 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:47:04.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7673" for this suite.

• [SLOW TEST:18.531 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":119,"skipped":1916,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:47:05.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 14:47:09.012: INFO: Waiting up to 5m0s for pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df" in namespace "downward-api-8514" to be "success or failure"
Aug 26 14:47:09.867: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df": Phase="Pending", Reason="", readiness=false. Elapsed: 854.710819ms
Aug 26 14:47:11.969: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.956987891s
Aug 26 14:47:14.255: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df": Phase="Pending", Reason="", readiness=false. Elapsed: 5.243366434s
Aug 26 14:47:16.912: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df": Phase="Pending", Reason="", readiness=false. Elapsed: 7.900368993s
Aug 26 14:47:19.855: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.842913382s
Aug 26 14:47:21.896: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df": Phase="Running", Reason="", readiness=true. Elapsed: 12.883840999s
Aug 26 14:47:23.902: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.890351503s
STEP: Saw pod success
Aug 26 14:47:23.902: INFO: Pod "downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df" satisfied condition "success or failure"
Aug 26 14:47:23.906: INFO: Trying to get logs from node jerma-worker pod downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df container dapi-container: 
STEP: delete the pod
Aug 26 14:47:23.963: INFO: Waiting for pod downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df to disappear
Aug 26 14:47:24.111: INFO: Pod downward-api-ee2feede-650a-43e6-b6a5-fa9c8b16e5df no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:47:24.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8514" for this suite.

• [SLOW TEST:18.918 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1924,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:47:24.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:47:43.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3407" for this suite.

• [SLOW TEST:19.623 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":121,"skipped":1953,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:47:43.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1756.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1756.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 14:47:59.058: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.063: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.066: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.070: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.082: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.086: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.090: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.093: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:47:59.101: INFO: Lookups using dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local]

Aug 26 14:48:04.108: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.112: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.116: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.120: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.292: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.479: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.484: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.533: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:04.695: INFO: Lookups using dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local]

Aug 26 14:48:09.108: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.112: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.117: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.121: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.148: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.153: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.157: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.161: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:09.169: INFO: Lookups using dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local]

Aug 26 14:48:14.106: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.110: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.113: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.115: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.123: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.126: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.129: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.132: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:14.137: INFO: Lookups using dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local]

Aug 26 14:48:19.609: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:19.613: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:20.017: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:20.089: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:20.269: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:20.273: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:20.276: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:20.279: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:20.284: INFO: Lookups using dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local]

Aug 26 14:48:24.377: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:24.466: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:24.807: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:24.861: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:25.530: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:25.536: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:25.541: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local from pod dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360: the server could not find the requested resource (get pods dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360)
Aug 26 14:48:25.549: INFO: Lookups using dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1756.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1756.svc.cluster.local jessie_udp@dns-test-service-2.dns-1756.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1756.svc.cluster.local]

Aug 26 14:48:29.465: INFO: DNS probes using dns-1756/dns-test-a4ac35a0-48a9-4dec-8881-252262ec1360 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:48:30.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1756" for this suite.

• [SLOW TEST:46.826 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":122,"skipped":1973,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:48:30.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:48:30.805: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:48:32.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1658" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":123,"skipped":1992,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:48:32.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 14:48:33.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-321'
Aug 26 14:48:35.911: INFO: stderr: ""
Aug 26 14:48:35.912: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 26 14:48:35.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-321'
Aug 26 14:48:37.748: INFO: stderr: ""
Aug 26 14:48:37.749: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 14:48:38.756: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 14:48:38.757: INFO: Found 0 / 1
Aug 26 14:48:39.756: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 14:48:39.756: INFO: Found 0 / 1
Aug 26 14:48:40.756: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 14:48:40.757: INFO: Found 1 / 1
Aug 26 14:48:40.757: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 26 14:48:40.762: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 14:48:40.762: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 14:48:40.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-cc92p --namespace=kubectl-321'
Aug 26 14:48:42.116: INFO: stderr: ""
Aug 26 14:48:42.116: INFO: stdout: "Name:         agnhost-master-cc92p\nNamespace:    kubectl-321\nPriority:     0\nNode:         jerma-worker/172.18.0.6\nStart Time:   Wed, 26 Aug 2020 14:48:36 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.72\nIPs:\n  IP:           10.244.2.72\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://a293ee4acbc417a8461260f2589177a367ed4f7594f9e44fe04c401351a374a9\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 26 Aug 2020 14:48:39 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j9zqw (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-j9zqw:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-j9zqw\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                   Message\n  ----    ------     ----       ----                   -------\n  Normal  Scheduled  <unknown>  default-scheduler      Successfully assigned kubectl-321/agnhost-master-cc92p to jerma-worker\n  Normal  Pulled     5s         kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    3s         kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    3s         kubelet, jerma-worker  Started container agnhost-master\n"
Aug 26 14:48:42.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-321'
Aug 26 14:48:43.446: INFO: stderr: ""
Aug 26 14:48:43.446: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-321\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-cc92p\n"
Aug 26 14:48:43.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-321'
Aug 26 14:48:44.574: INFO: stderr: ""
Aug 26 14:48:44.574: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-321\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.97.214.143\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.72:6379\nSession Affinity:  None\nEvents:            <none>\n"
Aug 26 14:48:44.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Aug 26 14:48:45.822: INFO: stderr: ""
Aug 26 14:48:45.822: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:37:06 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Wed, 26 Aug 2020 14:48:43 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 26 Aug 2020 14:47:06 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 26 Aug 2020 14:47:06 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 26 Aug 2020 14:47:06 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 26 Aug 2020 14:47:06 +0000   Sat, 15 Aug 2020 09:37:40 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 e52c45bc589d48d995e8fd79ad5bf250\n  System UUID:                b981bdc7-d264-48ef-ab5e-3308e23aaf13\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-bvrm4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                 coredns-6955765f44-db8rh                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                 etcd-jerma-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kindnet-j88mt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      11d\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-proxy-hmb6l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         11d\n  local-path-storage          local-path-provisioner-58f6947c7-p2cqw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Aug 26 14:48:45.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-321'
Aug 26 14:48:46.942: INFO: stderr: ""
Aug 26 14:48:46.943: INFO: stdout: "Name:         kubectl-321\nLabels:       e2e-framework=kubectl\n              e2e-run=8d1f7caf-4170-474c-8408-2dd603ddf8f0\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:48:46.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-321" for this suite.

• [SLOW TEST:14.424 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":124,"skipped":1996,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:48:46.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 26 14:48:47.811: INFO: Pod name wrapped-volume-race-bae35aaf-613a-454f-bc56-5d72e3124b37: Found 0 pods out of 5
Aug 26 14:48:52.852: INFO: Pod name wrapped-volume-race-bae35aaf-613a-454f-bc56-5d72e3124b37: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bae35aaf-613a-454f-bc56-5d72e3124b37 in namespace emptydir-wrapper-2377, will wait for the garbage collector to delete the pods
Aug 26 14:49:13.185: INFO: Deleting ReplicationController wrapped-volume-race-bae35aaf-613a-454f-bc56-5d72e3124b37 took: 806.908973ms
Aug 26 14:49:14.186: INFO: Terminating ReplicationController wrapped-volume-race-bae35aaf-613a-454f-bc56-5d72e3124b37 pods took: 1.000898911s
STEP: Creating RC which spawns configmap-volume pods
Aug 26 14:49:33.923: INFO: Pod name wrapped-volume-race-ad2438f2-3265-4d2e-8aec-3c04ecb8cb11: Found 0 pods out of 5
Aug 26 14:49:38.934: INFO: Pod name wrapped-volume-race-ad2438f2-3265-4d2e-8aec-3c04ecb8cb11: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ad2438f2-3265-4d2e-8aec-3c04ecb8cb11 in namespace emptydir-wrapper-2377, will wait for the garbage collector to delete the pods
Aug 26 14:49:59.388: INFO: Deleting ReplicationController wrapped-volume-race-ad2438f2-3265-4d2e-8aec-3c04ecb8cb11 took: 368.247802ms
Aug 26 14:49:59.889: INFO: Terminating ReplicationController wrapped-volume-race-ad2438f2-3265-4d2e-8aec-3c04ecb8cb11 pods took: 501.038104ms
STEP: Creating RC which spawns configmap-volume pods
Aug 26 14:50:12.191: INFO: Pod name wrapped-volume-race-c29f03c0-1691-4f73-8866-834edefcc13b: Found 0 pods out of 5
Aug 26 14:50:17.206: INFO: Pod name wrapped-volume-race-c29f03c0-1691-4f73-8866-834edefcc13b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c29f03c0-1691-4f73-8866-834edefcc13b in namespace emptydir-wrapper-2377, will wait for the garbage collector to delete the pods
Aug 26 14:50:39.331: INFO: Deleting ReplicationController wrapped-volume-race-c29f03c0-1691-4f73-8866-834edefcc13b took: 7.66363ms
Aug 26 14:50:39.732: INFO: Terminating ReplicationController wrapped-volume-race-c29f03c0-1691-4f73-8866-834edefcc13b pods took: 400.716442ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:50:54.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2377" for this suite.

• [SLOW TEST:127.991 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":125,"skipped":2013,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:50:54.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088
Aug 26 14:50:55.304: INFO: Pod name my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088: Found 0 pods out of 1
Aug 26 14:51:00.337: INFO: Pod name my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088: Found 1 pods out of 1
Aug 26 14:51:00.337: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088" are running
Aug 26 14:51:00.366: INFO: Pod "my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088-6nqbp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:50:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:50:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:50:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 14:50:55 +0000 UTC Reason: Message:}])
Aug 26 14:51:00.367: INFO: Trying to dial the pod
Aug 26 14:51:05.407: INFO: Controller my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088: Got expected result from replica 1 [my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088-6nqbp]: "my-hostname-basic-e311d213-0b27-481d-9214-bf90f2e10088-6nqbp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:51:05.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7670" for this suite.

• [SLOW TEST:10.468 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":126,"skipped":2016,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:51:05.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 26 14:51:05.541: INFO: Waiting up to 5m0s for pod "pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4" in namespace "emptydir-7273" to be "success or failure"
Aug 26 14:51:05.554: INFO: Pod "pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.86465ms
Aug 26 14:51:07.559: INFO: Pod "pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017248629s
Aug 26 14:51:09.565: INFO: Pod "pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4": Phase="Running", Reason="", readiness=true. Elapsed: 4.02337155s
Aug 26 14:51:11.570: INFO: Pod "pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028622135s
STEP: Saw pod success
Aug 26 14:51:11.571: INFO: Pod "pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4" satisfied condition "success or failure"
Aug 26 14:51:11.575: INFO: Trying to get logs from node jerma-worker pod pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4 container test-container: 
STEP: delete the pod
Aug 26 14:51:11.607: INFO: Waiting for pod pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4 to disappear
Aug 26 14:51:11.624: INFO: Pod pod-9370d09c-8d5f-46c3-9164-2ef3f892aed4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:51:11.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7273" for this suite.

• [SLOW TEST:6.212 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2041,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:51:11.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 26 14:51:18.332: INFO: Successfully updated pod "labelsupdate36f5f1b4-b59d-4564-9e09-c5dea59deed1"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:51:21.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2301" for this suite.

• [SLOW TEST:9.646 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2062,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:51:21.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:51:22.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3228" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":129,"skipped":2066,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:51:22.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:51:40.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7461" for this suite.

• [SLOW TEST:18.430 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":130,"skipped":2071,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:51:40.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 14:51:46.132: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:51:46.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8785" for this suite.

• [SLOW TEST:5.545 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2084,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:51:46.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2460
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 26 14:51:47.032: INFO: Found 0 stateful pods, waiting for 3
Aug 26 14:51:57.110: INFO: Found 2 stateful pods, waiting for 3
Aug 26 14:52:07.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:52:07.040: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:52:07.040: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:52:07.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2460 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 14:52:08.505: INFO: stderr: "I0826 14:52:08.321983    1621 log.go:172] (0x29ae000) (0x29ae070) Create stream\nI0826 14:52:08.323964    1621 log.go:172] (0x29ae000) (0x29ae070) Stream added, broadcasting: 1\nI0826 14:52:08.340050    1621 log.go:172] (0x29ae000) Reply frame received for 1\nI0826 14:52:08.340502    1621 log.go:172] (0x29ae000) (0x25c60e0) Create stream\nI0826 14:52:08.340568    1621 log.go:172] (0x29ae000) (0x25c60e0) Stream added, broadcasting: 3\nI0826 14:52:08.341978    1621 log.go:172] (0x29ae000) Reply frame received for 3\nI0826 14:52:08.342278    1621 log.go:172] (0x29ae000) (0x271f1f0) Create stream\nI0826 14:52:08.342355    1621 log.go:172] (0x29ae000) (0x271f1f0) Stream added, broadcasting: 5\nI0826 14:52:08.343257    1621 log.go:172] (0x29ae000) Reply frame received for 5\nI0826 14:52:08.434426    1621 log.go:172] (0x29ae000) Data frame received for 5\nI0826 14:52:08.434607    1621 log.go:172] (0x271f1f0) (5) Data frame handling\nI0826 14:52:08.434914    1621 log.go:172] (0x271f1f0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 14:52:08.490811    1621 log.go:172] (0x29ae000) Data frame received for 3\nI0826 14:52:08.490942    1621 log.go:172] (0x25c60e0) (3) Data frame handling\nI0826 14:52:08.491057    1621 log.go:172] (0x29ae000) Data frame received for 5\nI0826 14:52:08.491209    1621 log.go:172] (0x271f1f0) (5) Data frame handling\nI0826 14:52:08.491360    1621 log.go:172] (0x25c60e0) (3) Data frame sent\nI0826 14:52:08.491548    1621 log.go:172] (0x29ae000) Data frame received for 3\nI0826 14:52:08.491666    1621 log.go:172] (0x25c60e0) (3) Data frame handling\nI0826 14:52:08.492019    1621 log.go:172] (0x29ae000) Data frame received for 1\nI0826 14:52:08.492088    1621 log.go:172] (0x29ae070) (1) Data frame handling\nI0826 14:52:08.492167    1621 log.go:172] (0x29ae070) (1) Data frame sent\nI0826 14:52:08.493658    1621 log.go:172] (0x29ae000) (0x29ae070) Stream removed, broadcasting: 1\nI0826 14:52:08.495264    1621 log.go:172] (0x29ae000) Go away received\nI0826 14:52:08.497390    1621 log.go:172] (0x29ae000) (0x29ae070) Stream removed, broadcasting: 1\nI0826 14:52:08.497613    1621 log.go:172] (0x29ae000) (0x25c60e0) Stream removed, broadcasting: 3\nI0826 14:52:08.497774    1621 log.go:172] (0x29ae000) (0x271f1f0) Stream removed, broadcasting: 5\n"
Aug 26 14:52:08.506: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 14:52:08.506: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 26 14:52:18.595: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 26 14:52:28.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2460 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 14:52:29.995: INFO: stderr: "I0826 14:52:29.901174    1645 log.go:172] (0x2c4aa10) (0x2c4aa80) Create stream\nI0826 14:52:29.905331    1645 log.go:172] (0x2c4aa10) (0x2c4aa80) Stream added, broadcasting: 1\nI0826 14:52:29.923634    1645 log.go:172] (0x2c4aa10) Reply frame received for 1\nI0826 14:52:29.924178    1645 log.go:172] (0x2c4aa10) (0x293e070) Create stream\nI0826 14:52:29.924259    1645 log.go:172] (0x2c4aa10) (0x293e070) Stream added, broadcasting: 3\nI0826 14:52:29.925652    1645 log.go:172] (0x2c4aa10) Reply frame received for 3\nI0826 14:52:29.925897    1645 log.go:172] (0x2c4aa10) (0x27e0310) Create stream\nI0826 14:52:29.925967    1645 log.go:172] (0x2c4aa10) (0x27e0310) Stream added, broadcasting: 5\nI0826 14:52:29.926953    1645 log.go:172] (0x2c4aa10) Reply frame received for 5\nI0826 14:52:29.973718    1645 log.go:172] (0x2c4aa10) Data frame received for 5\nI0826 14:52:29.974007    1645 log.go:172] (0x2c4aa10) Data frame received for 3\nI0826 14:52:29.974110    1645 log.go:172] (0x27e0310) (5) Data frame handling\nI0826 14:52:29.974285    1645 log.go:172] (0x293e070) (3) Data frame handling\nI0826 14:52:29.974458    1645 log.go:172] (0x2c4aa10) Data frame received for 1\nI0826 14:52:29.974538    1645 log.go:172] (0x2c4aa80) (1) Data frame handling\nI0826 14:52:29.975318    1645 log.go:172] (0x2c4aa80) (1) Data frame sent\nI0826 14:52:29.975499    1645 log.go:172] (0x293e070) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 14:52:29.976027    1645 log.go:172] (0x2c4aa10) Data frame received for 3\nI0826 14:52:29.976131    1645 log.go:172] (0x293e070) (3) Data frame handling\nI0826 14:52:29.976335    1645 log.go:172] (0x27e0310) (5) Data frame sent\nI0826 14:52:29.976404    1645 log.go:172] (0x2c4aa10) Data frame received for 5\nI0826 14:52:29.976456    1645 log.go:172] (0x27e0310) (5) Data frame handling\nI0826 14:52:29.978142    1645 log.go:172] (0x2c4aa10) (0x2c4aa80) Stream removed, broadcasting: 1\nI0826 14:52:29.978504    1645 log.go:172] (0x2c4aa10) Go away received\nI0826 14:52:29.981265    1645 log.go:172] (0x2c4aa10) (0x2c4aa80) Stream removed, broadcasting: 1\nI0826 14:52:29.981499    1645 log.go:172] (0x2c4aa10) (0x293e070) Stream removed, broadcasting: 3\nI0826 14:52:29.981680    1645 log.go:172] (0x2c4aa10) (0x27e0310) Stream removed, broadcasting: 5\n"
Aug 26 14:52:29.996: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 14:52:29.996: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 14:52:40.023: INFO: Waiting for StatefulSet statefulset-2460/ss2 to complete update
Aug 26 14:52:40.024: INFO: Waiting for Pod statefulset-2460/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 14:52:40.024: INFO: Waiting for Pod statefulset-2460/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 14:52:50.260: INFO: Waiting for StatefulSet statefulset-2460/ss2 to complete update
Aug 26 14:52:50.260: INFO: Waiting for Pod statefulset-2460/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 14:53:00.039: INFO: Waiting for StatefulSet statefulset-2460/ss2 to complete update
Aug 26 14:53:00.040: INFO: Waiting for Pod statefulset-2460/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Aug 26 14:53:10.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2460 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 14:53:12.209: INFO: stderr: "I0826 14:53:11.793415    1668 log.go:172] (0x2832070) (0x2715f80) Create stream\nI0826 14:53:11.798163    1668 log.go:172] (0x2832070) (0x2715f80) Stream added, broadcasting: 1\nI0826 14:53:11.805871    1668 log.go:172] (0x2832070) Reply frame received for 1\nI0826 14:53:11.806310    1668 log.go:172] (0x2832070) (0x2600230) Create stream\nI0826 14:53:11.806368    1668 log.go:172] (0x2832070) (0x2600230) Stream added, broadcasting: 3\nI0826 14:53:11.807404    1668 log.go:172] (0x2832070) Reply frame received for 3\nI0826 14:53:11.807594    1668 log.go:172] (0x2832070) (0x24b4f50) Create stream\nI0826 14:53:11.807707    1668 log.go:172] (0x2832070) (0x24b4f50) Stream added, broadcasting: 5\nI0826 14:53:11.808577    1668 log.go:172] (0x2832070) Reply frame received for 5\nI0826 14:53:11.862537    1668 log.go:172] (0x2832070) Data frame received for 5\nI0826 14:53:11.862704    1668 log.go:172] (0x24b4f50) (5) Data frame handling\nI0826 14:53:11.862995    1668 log.go:172] (0x24b4f50) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 14:53:12.181511    1668 log.go:172] (0x2832070) Data frame received for 3\nI0826 14:53:12.181817    1668 log.go:172] (0x2600230) (3) Data frame handling\nI0826 14:53:12.182062    1668 log.go:172] (0x2600230) (3) Data frame sent\nI0826 14:53:12.182256    1668 log.go:172] (0x2832070) Data frame received for 3\nI0826 14:53:12.182436    1668 log.go:172] (0x2600230) (3) Data frame handling\nI0826 14:53:12.183413    1668 log.go:172] (0x2832070) Data frame received for 5\nI0826 14:53:12.183690    1668 log.go:172] (0x24b4f50) (5) Data frame handling\nI0826 14:53:12.183889    1668 log.go:172] (0x2832070) Data frame received for 1\nI0826 14:53:12.184144    1668 log.go:172] (0x2715f80) (1) Data frame handling\nI0826 14:53:12.184309    1668 log.go:172] (0x2715f80) (1) Data frame sent\nI0826 14:53:12.185234    1668 log.go:172] (0x2832070) (0x2715f80) Stream removed, broadcasting: 1\nI0826 14:53:12.187843    1668 log.go:172] (0x2832070) (0x2715f80) Stream removed, broadcasting: 1\nI0826 14:53:12.188118    1668 log.go:172] (0x2832070) (0x2600230) Stream removed, broadcasting: 3\nI0826 14:53:12.190257    1668 log.go:172] (0x2832070) (0x24b4f50) Stream removed, broadcasting: 5\nI0826 14:53:12.192682    1668 log.go:172] (0x2832070) Go away received\n"
Aug 26 14:53:12.209: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 14:53:12.209: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 14:53:23.366: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 26 14:53:33.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2460 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 14:53:34.936: INFO: stderr: "I0826 14:53:34.848883    1692 log.go:172] (0x2cbc070) (0x2cbc0e0) Create stream\nI0826 14:53:34.853573    1692 log.go:172] (0x2cbc070) (0x2cbc0e0) Stream added, broadcasting: 1\nI0826 14:53:34.861519    1692 log.go:172] (0x2cbc070) Reply frame received for 1\nI0826 14:53:34.861962    1692 log.go:172] (0x2cbc070) (0x25e83f0) Create stream\nI0826 14:53:34.862022    1692 log.go:172] (0x2cbc070) (0x25e83f0) Stream added, broadcasting: 3\nI0826 14:53:34.863354    1692 log.go:172] (0x2cbc070) Reply frame received for 3\nI0826 14:53:34.863600    1692 log.go:172] (0x2cbc070) (0x2cbc2a0) Create stream\nI0826 14:53:34.863664    1692 log.go:172] (0x2cbc070) (0x2cbc2a0) Stream added, broadcasting: 5\nI0826 14:53:34.865099    1692 log.go:172] (0x2cbc070) Reply frame received for 5\nI0826 14:53:34.918175    1692 log.go:172] (0x2cbc070) Data frame received for 3\nI0826 14:53:34.918421    1692 log.go:172] (0x25e83f0) (3) Data frame handling\nI0826 14:53:34.918813    1692 log.go:172] (0x2cbc070) Data frame received for 5\nI0826 14:53:34.918902    1692 log.go:172] (0x2cbc2a0) (5) Data frame handling\nI0826 14:53:34.918990    1692 log.go:172] (0x25e83f0) (3) Data frame sent\nI0826 14:53:34.919301    1692 log.go:172] (0x2cbc2a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 14:53:34.919574    1692 log.go:172] (0x2cbc070) Data frame received for 3\nI0826 14:53:34.919705    1692 log.go:172] (0x25e83f0) (3) Data frame handling\nI0826 14:53:34.919794    1692 log.go:172] (0x2cbc070) Data frame received for 1\nI0826 14:53:34.919891    1692 log.go:172] (0x2cbc0e0) (1) Data frame handling\nI0826 14:53:34.920032    1692 log.go:172] (0x2cbc070) Data frame received for 5\nI0826 14:53:34.920127    1692 log.go:172] (0x2cbc2a0) (5) Data frame handling\nI0826 14:53:34.920256    1692 log.go:172] (0x2cbc0e0) (1) Data frame sent\nI0826 14:53:34.921180    1692 log.go:172] (0x2cbc070) (0x2cbc0e0) Stream removed, broadcasting: 1\nI0826 14:53:34.922620    1692 log.go:172] (0x2cbc070) Go away received\nI0826 14:53:34.924255    1692 log.go:172] (0x2cbc070) (0x2cbc0e0) Stream removed, broadcasting: 1\nI0826 14:53:34.924395    1692 log.go:172] (0x2cbc070) (0x25e83f0) Stream removed, broadcasting: 3\nI0826 14:53:34.924535    1692 log.go:172] (0x2cbc070) (0x2cbc2a0) Stream removed, broadcasting: 5\n"
Aug 26 14:53:34.937: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 14:53:34.937: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 14:53:44.968: INFO: Waiting for StatefulSet statefulset-2460/ss2 to complete update
Aug 26 14:53:44.969: INFO: Waiting for Pod statefulset-2460/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 14:53:44.969: INFO: Waiting for Pod statefulset-2460/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 14:53:54.981: INFO: Waiting for StatefulSet statefulset-2460/ss2 to complete update
Aug 26 14:53:54.981: INFO: Waiting for Pod statefulset-2460/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 14:54:05.129: INFO: Waiting for StatefulSet statefulset-2460/ss2 to complete update
Aug 26 14:54:05.129: INFO: Waiting for Pod statefulset-2460/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 14:54:15.375: INFO: Waiting for StatefulSet statefulset-2460/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 14:54:25.344: INFO: Deleting all statefulset in ns statefulset-2460
Aug 26 14:54:25.349: INFO: Scaling statefulset ss2 to 0
Aug 26 14:55:05.552: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 14:55:05.555: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:55:05.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2460" for this suite.

• [SLOW TEST:199.319 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":132,"skipped":2098,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:55:05.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-37aebb8d-c49b-4a41-9d8d-044dbc50443d
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-37aebb8d-c49b-4a41-9d8d-044dbc50443d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:56:29.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-697" for this suite.

• [SLOW TEST:84.063 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:56:29.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-347
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 26 14:56:29.975: INFO: Found 0 stateful pods, waiting for 3
Aug 26 14:56:39.983: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:56:39.983: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:56:39.983: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 14:56:49.987: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:56:49.987: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:56:49.987: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 26 14:56:50.027: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 26 14:57:00.149: INFO: Updating stateful set ss2
Aug 26 14:57:00.319: INFO: Waiting for Pod statefulset-347/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 14:57:10.333: INFO: Waiting for Pod statefulset-347/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 26 14:57:21.217: INFO: Found 2 stateful pods, waiting for 3
Aug 26 14:57:31.227: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:57:31.228: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:57:31.228: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 14:57:41.247: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:57:41.247: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 14:57:41.248: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 26 14:57:41.752: INFO: Updating stateful set ss2
Aug 26 14:57:42.523: INFO: Waiting for Pod statefulset-347/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 14:57:53.005: INFO: Updating stateful set ss2
Aug 26 14:57:53.396: INFO: Waiting for StatefulSet statefulset-347/ss2 to complete update
Aug 26 14:57:53.397: INFO: Waiting for Pod statefulset-347/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 14:58:03.413: INFO: Waiting for StatefulSet statefulset-347/ss2 to complete update
Aug 26 14:58:03.413: INFO: Waiting for Pod statefulset-347/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 14:58:14.013: INFO: Waiting for StatefulSet statefulset-347/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 14:58:23.823: INFO: Deleting all statefulset in ns statefulset-347
Aug 26 14:58:23.970: INFO: Scaling statefulset ss2 to 0
Aug 26 14:58:54.167: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 14:58:54.176: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:58:54.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-347" for this suite.

• [SLOW TEST:144.612 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":134,"skipped":2159,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:58:54.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:08.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2496" for this suite.

• [SLOW TEST:14.290 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":135,"skipped":2179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:08.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-29ce2749-eed1-402d-960a-c0be9ee25986
STEP: Creating a pod to test consume secrets
Aug 26 14:59:10.767: INFO: Waiting up to 5m0s for pod "pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c" in namespace "secrets-5963" to be "success or failure"
Aug 26 14:59:11.059: INFO: Pod "pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c": Phase="Pending", Reason="", readiness=false. Elapsed: 290.778274ms
Aug 26 14:59:13.066: INFO: Pod "pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297649941s
Aug 26 14:59:15.131: INFO: Pod "pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363090643s
Aug 26 14:59:17.305: INFO: Pod "pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.536915525s
Aug 26 14:59:19.599: INFO: Pod "pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.830880841s
STEP: Saw pod success
Aug 26 14:59:19.599: INFO: Pod "pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c" satisfied condition "success or failure"
Aug 26 14:59:19.603: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c container secret-volume-test: 
STEP: delete the pod
Aug 26 14:59:19.658: INFO: Waiting for pod pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c to disappear
Aug 26 14:59:19.806: INFO: Pod pod-secrets-c2eab77a-52bc-4e2a-96bd-63662203d00c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:19.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5963" for this suite.

• [SLOW TEST:11.195 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:19.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:28.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5182" for this suite.
STEP: Destroying namespace "nsdeletetest-4958" for this suite.
Aug 26 14:59:28.067: INFO: Namespace nsdeletetest-4958 was already deleted
STEP: Destroying namespace "nsdeletetest-4091" for this suite.

• [SLOW TEST:8.252 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":137,"skipped":2235,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:28.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 26 14:59:28.421: INFO: Waiting up to 5m0s for pod "client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b" in namespace "containers-7030" to be "success or failure"
Aug 26 14:59:28.427: INFO: Pod "client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.749443ms
Aug 26 14:59:30.527: INFO: Pod "client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106471219s
Aug 26 14:59:32.534: INFO: Pod "client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113022407s
Aug 26 14:59:34.541: INFO: Pod "client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b": Phase="Running", Reason="", readiness=true. Elapsed: 6.120064123s
Aug 26 14:59:36.548: INFO: Pod "client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126858735s
STEP: Saw pod success
Aug 26 14:59:36.548: INFO: Pod "client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b" satisfied condition "success or failure"
Aug 26 14:59:36.562: INFO: Trying to get logs from node jerma-worker pod client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b container test-container: 
STEP: delete the pod
Aug 26 14:59:36.598: INFO: Waiting for pod client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b to disappear
Aug 26 14:59:36.616: INFO: Pod client-containers-1b65788b-c271-48d9-9cf6-4f92b0bfa48b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:36.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7030" for this suite.

• [SLOW TEST:8.593 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2241,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:36.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 26 14:59:36.850: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 26 14:59:41.856: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:41.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5580" for this suite.

• [SLOW TEST:5.361 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":139,"skipped":2252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:42.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:48.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4488" for this suite.

• [SLOW TEST:7.463 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2280,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:49.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-212f0d8f-f019-4efa-8e43-34bd39bf3c1a
STEP: Creating a pod to test consume secrets
Aug 26 14:59:50.803: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae" in namespace "projected-2775" to be "success or failure"
Aug 26 14:59:51.420: INFO: Pod "pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae": Phase="Pending", Reason="", readiness=false. Elapsed: 616.60996ms
Aug 26 14:59:53.426: INFO: Pod "pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.623119065s
Aug 26 14:59:55.433: INFO: Pod "pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.629462813s
Aug 26 14:59:57.439: INFO: Pod "pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.636184347s
STEP: Saw pod success
Aug 26 14:59:57.440: INFO: Pod "pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae" satisfied condition "success or failure"
Aug 26 14:59:57.444: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae container secret-volume-test: 
STEP: delete the pod
Aug 26 14:59:57.469: INFO: Waiting for pod pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae to disappear
Aug 26 14:59:57.502: INFO: Pod pod-projected-secrets-6a6745d9-2a70-4163-8263-fa5b6e791dae no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:57.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2775" for this suite.

• [SLOW TEST:8.015 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2291,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:57.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 26 14:59:58.172: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1201 /api/v1/namespaces/watch-1201/configmaps/e2e-watch-test-watch-closed eaca78ca-094a-456b-a7e3-5cf2f9ed27c3 3908271 0 2020-08-26 14:59:58 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 14:59:58.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1201 /api/v1/namespaces/watch-1201/configmaps/e2e-watch-test-watch-closed eaca78ca-094a-456b-a7e3-5cf2f9ed27c3 3908272 0 2020-08-26 14:59:58 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 26 14:59:58.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1201 /api/v1/namespaces/watch-1201/configmaps/e2e-watch-test-watch-closed eaca78ca-094a-456b-a7e3-5cf2f9ed27c3 3908273 0 2020-08-26 14:59:58 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 14:59:58.345: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1201 /api/v1/namespaces/watch-1201/configmaps/e2e-watch-test-watch-closed eaca78ca-094a-456b-a7e3-5cf2f9ed27c3 3908274 0 2020-08-26 14:59:58 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 14:59:58.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1201" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":142,"skipped":2296,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 14:59:58.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:00:03.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2" for this suite.

• [SLOW TEST:6.049 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":143,"skipped":2304,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:00:04.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:00:04.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd" in namespace "downward-api-3679" to be "success or failure"
Aug 26 15:00:05.577: INFO: Pod "downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd": Phase="Pending", Reason="", readiness=false. Elapsed: 651.074925ms
Aug 26 15:00:07.583: INFO: Pod "downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657407444s
Aug 26 15:00:09.588: INFO: Pod "downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662953459s
Aug 26 15:00:11.688: INFO: Pod "downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.762930149s
STEP: Saw pod success
Aug 26 15:00:11.689: INFO: Pod "downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd" satisfied condition "success or failure"
Aug 26 15:00:11.962: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd container client-container: 
STEP: delete the pod
Aug 26 15:00:12.533: INFO: Waiting for pod downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd to disappear
Aug 26 15:00:12.633: INFO: Pod downwardapi-volume-cd0ea8fa-bf1d-4952-a4c0-609681f390bd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:00:12.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3679" for this suite.

• [SLOW TEST:8.209 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2346,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:00:12.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:00:21.078: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 15:00:23.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:00:25.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734050821, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:00:29.013: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:00:29.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3819" for this suite.
STEP: Destroying namespace "webhook-3819-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.084 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":145,"skipped":2352,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:00:30.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6328
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6328
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6328
Aug 26 15:00:30.958: INFO: Found 0 stateful pods, waiting for 1
Aug 26 15:00:40.966: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 26 15:00:40.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:00:46.130: INFO: stderr: "I0826 15:00:45.531766    1714 log.go:172] (0x25dc150) (0x25dc310) Create stream\nI0826 15:00:45.533348    1714 log.go:172] (0x25dc150) (0x25dc310) Stream added, broadcasting: 1\nI0826 15:00:45.544285    1714 log.go:172] (0x25dc150) Reply frame received for 1\nI0826 15:00:45.545480    1714 log.go:172] (0x25dc150) (0x26ed880) Create stream\nI0826 15:00:45.545640    1714 log.go:172] (0x25dc150) (0x26ed880) Stream added, broadcasting: 3\nI0826 15:00:45.547357    1714 log.go:172] (0x25dc150) Reply frame received for 3\nI0826 15:00:45.547594    1714 log.go:172] (0x25dc150) (0x24b08c0) Create stream\nI0826 15:00:45.547653    1714 log.go:172] (0x25dc150) (0x24b08c0) Stream added, broadcasting: 5\nI0826 15:00:45.549013    1714 log.go:172] (0x25dc150) Reply frame received for 5\nI0826 15:00:45.601346    1714 log.go:172] (0x25dc150) Data frame received for 5\nI0826 15:00:45.601685    1714 log.go:172] (0x24b08c0) (5) Data frame handling\nI0826 15:00:45.602323    1714 log.go:172] (0x24b08c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:00:46.107939    1714 log.go:172] (0x25dc150) Data frame received for 3\nI0826 15:00:46.108237    1714 log.go:172] (0x26ed880) (3) Data frame handling\nI0826 15:00:46.108431    1714 log.go:172] (0x25dc150) Data frame received for 5\nI0826 15:00:46.108661    1714 log.go:172] (0x24b08c0) (5) Data frame handling\nI0826 15:00:46.109251    1714 log.go:172] (0x26ed880) (3) Data frame sent\nI0826 15:00:46.109485    1714 log.go:172] (0x25dc150) Data frame received for 3\nI0826 15:00:46.109668    1714 log.go:172] (0x26ed880) (3) Data frame handling\nI0826 15:00:46.109868    1714 log.go:172] (0x25dc150) Data frame received for 1\nI0826 15:00:46.110051    1714 log.go:172] (0x25dc310) (1) Data frame handling\nI0826 15:00:46.110281    1714 log.go:172] (0x25dc310) (1) Data frame sent\nI0826 15:00:46.111885    1714 log.go:172] (0x25dc150) (0x25dc310) Stream removed, broadcasting: 1\nI0826 15:00:46.113420    1714 log.go:172] (0x25dc150) Go away received\nI0826 15:00:46.116995    1714 log.go:172] (0x25dc150) (0x25dc310) Stream removed, broadcasting: 1\nI0826 15:00:46.117162    1714 log.go:172] (0x25dc150) (0x26ed880) Stream removed, broadcasting: 3\nI0826 15:00:46.117320    1714 log.go:172] (0x25dc150) (0x24b08c0) Stream removed, broadcasting: 5\n"
Aug 26 15:00:46.131: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:00:46.132: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 15:00:46.137: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 15:00:56.216: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
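
The mv above is how these specs toggle pod health: hiding index.html makes ss-0's readiness check (presumably an HTTP fetch of the served page) start failing, so the pod goes Running - Ready=false and the controller must halt further scale-up; moving the file back later restores readiness. By hand that is roughly:

kubectl -n statefulset-6328 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/'
kubectl -n statefulset-6328 get pod ss-0 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # False while hidden
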
Aug 26 15:00:56.216: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 15:00:56.607: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99993351s
Aug 26 15:00:57.615: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984958437s
Aug 26 15:00:58.622: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.976849123s
Aug 26 15:00:59.639: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969578424s
Aug 26 15:01:00.645: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.953512345s
Aug 26 15:01:01.652: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.94678765s
Aug 26 15:01:02.659: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.940168131s
Aug 26 15:01:03.665: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.933205452s
Aug 26 15:01:04.671: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.927326964s
Aug 26 15:01:05.677: INFO: Verifying statefulset ss doesn't scale past 1 for another 921.009638ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6328
Aug 26 15:01:06.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:01:08.116: INFO: stderr: "I0826 15:01:08.020714    1744 log.go:172] (0x2c70620) (0x2c70690) Create stream\nI0826 15:01:08.022868    1744 log.go:172] (0x2c70620) (0x2c70690) Stream added, broadcasting: 1\nI0826 15:01:08.032087    1744 log.go:172] (0x2c70620) Reply frame received for 1\nI0826 15:01:08.032594    1744 log.go:172] (0x2c70620) (0x2c70850) Create stream\nI0826 15:01:08.032661    1744 log.go:172] (0x2c70620) (0x2c70850) Stream added, broadcasting: 3\nI0826 15:01:08.034025    1744 log.go:172] (0x2c70620) Reply frame received for 3\nI0826 15:01:08.034263    1744 log.go:172] (0x2c70620) (0x24a8c40) Create stream\nI0826 15:01:08.034328    1744 log.go:172] (0x2c70620) (0x24a8c40) Stream added, broadcasting: 5\nI0826 15:01:08.035379    1744 log.go:172] (0x2c70620) Reply frame received for 5\nI0826 15:01:08.098946    1744 log.go:172] (0x2c70620) Data frame received for 3\nI0826 15:01:08.099225    1744 log.go:172] (0x2c70620) Data frame received for 1\nI0826 15:01:08.099319    1744 log.go:172] (0x2c70850) (3) Data frame handling\nI0826 15:01:08.099777    1744 log.go:172] (0x2c70620) Data frame received for 5\nI0826 15:01:08.099867    1744 log.go:172] (0x24a8c40) (5) Data frame handling\nI0826 15:01:08.100057    1744 log.go:172] (0x2c70690) (1) Data frame handling\nI0826 15:01:08.100268    1744 log.go:172] (0x24a8c40) (5) Data frame sent\nI0826 15:01:08.100482    1744 log.go:172] (0x2c70690) (1) Data frame sent\nI0826 15:01:08.100607    1744 log.go:172] (0x2c70620) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 15:01:08.100712    1744 log.go:172] (0x24a8c40) (5) Data frame handling\nI0826 15:01:08.101555    1744 log.go:172] (0x2c70850) (3) Data frame sent\nI0826 15:01:08.101681    1744 log.go:172] (0x2c70620) Data frame received for 3\nI0826 15:01:08.101773    1744 log.go:172] (0x2c70620) (0x2c70690) Stream removed, broadcasting: 1\nI0826 15:01:08.102808    1744 log.go:172] (0x2c70850) (3) Data frame handling\nI0826 15:01:08.103726    1744 log.go:172] (0x2c70620) Go away received\nI0826 15:01:08.105857    1744 log.go:172] (0x2c70620) (0x2c70690) Stream removed, broadcasting: 1\nI0826 15:01:08.106034    1744 log.go:172] (0x2c70620) (0x2c70850) Stream removed, broadcasting: 3\nI0826 15:01:08.106184    1744 log.go:172] (0x2c70620) (0x24a8c40) Stream removed, broadcasting: 5\n"
Aug 26 15:01:08.117: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 15:01:08.117: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 15:01:08.121: INFO: Found 1 stateful pods, waiting for 3
Aug 26 15:01:18.192: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 15:01:18.192: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 15:01:18.192: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 15:01:28.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 15:01:28.130: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 15:01:28.130: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 26 15:01:28.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:01:29.563: INFO: stderr: "I0826 15:01:29.441634    1765 log.go:172] (0x28aa700) (0x28aa770) Create stream\nI0826 15:01:29.443696    1765 log.go:172] (0x28aa700) (0x28aa770) Stream added, broadcasting: 1\nI0826 15:01:29.457098    1765 log.go:172] (0x28aa700) Reply frame received for 1\nI0826 15:01:29.457728    1765 log.go:172] (0x28aa700) (0x27eaa80) Create stream\nI0826 15:01:29.457806    1765 log.go:172] (0x28aa700) (0x27eaa80) Stream added, broadcasting: 3\nI0826 15:01:29.458937    1765 log.go:172] (0x28aa700) Reply frame received for 3\nI0826 15:01:29.459142    1765 log.go:172] (0x28aa700) (0x27ead20) Create stream\nI0826 15:01:29.459201    1765 log.go:172] (0x28aa700) (0x27ead20) Stream added, broadcasting: 5\nI0826 15:01:29.460193    1765 log.go:172] (0x28aa700) Reply frame received for 5\nI0826 15:01:29.542013    1765 log.go:172] (0x28aa700) Data frame received for 3\nI0826 15:01:29.542384    1765 log.go:172] (0x28aa700) Data frame received for 5\nI0826 15:01:29.542563    1765 log.go:172] (0x28aa700) Data frame received for 1\nI0826 15:01:29.542715    1765 log.go:172] (0x28aa770) (1) Data frame handling\nI0826 15:01:29.542823    1765 log.go:172] (0x27ead20) (5) Data frame handling\nI0826 15:01:29.543039    1765 log.go:172] (0x27eaa80) (3) Data frame handling\nI0826 15:01:29.543960    1765 log.go:172] (0x27eaa80) (3) Data frame sent\nI0826 15:01:29.544196    1765 log.go:172] (0x28aa770) (1) Data frame sent\nI0826 15:01:29.544702    1765 log.go:172] (0x27ead20) (5) Data frame sent\nI0826 15:01:29.545392    1765 log.go:172] (0x28aa700) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:01:29.545511    1765 log.go:172] (0x27ead20) (5) Data frame handling\nI0826 15:01:29.546124    1765 log.go:172] (0x28aa700) Data frame received for 3\nI0826 15:01:29.546210    1765 log.go:172] (0x27eaa80) (3) Data frame handling\nI0826 15:01:29.547033    1765 log.go:172] (0x28aa700) (0x28aa770) Stream removed, broadcasting: 1\nI0826 15:01:29.547722    1765 log.go:172] (0x28aa700) Go away received\nI0826 15:01:29.551248    1765 log.go:172] (0x28aa700) (0x28aa770) Stream removed, broadcasting: 1\nI0826 15:01:29.551442    1765 log.go:172] (0x28aa700) (0x27eaa80) Stream removed, broadcasting: 3\nI0826 15:01:29.551608    1765 log.go:172] (0x28aa700) (0x27ead20) Stream removed, broadcasting: 5\n"
Aug 26 15:01:29.564: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:01:29.564: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 15:01:29.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:01:30.982: INFO: stderr: "I0826 15:01:30.820055    1787 log.go:172] (0x2b415e0) (0x2b41650) Create stream\nI0826 15:01:30.824919    1787 log.go:172] (0x2b415e0) (0x2b41650) Stream added, broadcasting: 1\nI0826 15:01:30.835800    1787 log.go:172] (0x2b415e0) Reply frame received for 1\nI0826 15:01:30.836487    1787 log.go:172] (0x2b415e0) (0x2b41810) Create stream\nI0826 15:01:30.836571    1787 log.go:172] (0x2b415e0) (0x2b41810) Stream added, broadcasting: 3\nI0826 15:01:30.838244    1787 log.go:172] (0x2b415e0) Reply frame received for 3\nI0826 15:01:30.838490    1787 log.go:172] (0x2b415e0) (0x2856150) Create stream\nI0826 15:01:30.838554    1787 log.go:172] (0x2b415e0) (0x2856150) Stream added, broadcasting: 5\nI0826 15:01:30.839662    1787 log.go:172] (0x2b415e0) Reply frame received for 5\nI0826 15:01:30.893846    1787 log.go:172] (0x2b415e0) Data frame received for 5\nI0826 15:01:30.894124    1787 log.go:172] (0x2856150) (5) Data frame handling\nI0826 15:01:30.894641    1787 log.go:172] (0x2856150) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:01:30.960425    1787 log.go:172] (0x2b415e0) Data frame received for 3\nI0826 15:01:30.960602    1787 log.go:172] (0x2b41810) (3) Data frame handling\nI0826 15:01:30.960905    1787 log.go:172] (0x2b415e0) Data frame received for 5\nI0826 15:01:30.961042    1787 log.go:172] (0x2856150) (5) Data frame handling\nI0826 15:01:30.961258    1787 log.go:172] (0x2b41810) (3) Data frame sent\nI0826 15:01:30.961416    1787 log.go:172] (0x2b415e0) Data frame received for 3\nI0826 15:01:30.961532    1787 log.go:172] (0x2b41810) (3) Data frame handling\nI0826 15:01:30.962797    1787 log.go:172] (0x2b415e0) Data frame received for 1\nI0826 15:01:30.962989    1787 log.go:172] (0x2b41650) (1) Data frame handling\nI0826 15:01:30.963180    1787 log.go:172] (0x2b41650) (1) Data frame sent\nI0826 15:01:30.964095    1787 log.go:172] (0x2b415e0) (0x2b41650) Stream removed, broadcasting: 1\nI0826 15:01:30.966871    1787 log.go:172] (0x2b415e0) Go away received\nI0826 15:01:30.968512    1787 log.go:172] (0x2b415e0) (0x2b41650) Stream removed, broadcasting: 1\nI0826 15:01:30.969220    1787 log.go:172] (0x2b415e0) (0x2b41810) Stream removed, broadcasting: 3\nI0826 15:01:30.969745    1787 log.go:172] (0x2b415e0) (0x2856150) Stream removed, broadcasting: 5\n"
Aug 26 15:01:30.983: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:01:30.983: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 15:01:30.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:01:33.015: INFO: stderr: "I0826 15:01:32.752877    1810 log.go:172] (0x2b4a150) (0x2b4a1c0) Create stream\nI0826 15:01:32.754858    1810 log.go:172] (0x2b4a150) (0x2b4a1c0) Stream added, broadcasting: 1\nI0826 15:01:32.764963    1810 log.go:172] (0x2b4a150) Reply frame received for 1\nI0826 15:01:32.765927    1810 log.go:172] (0x2b4a150) (0x2900070) Create stream\nI0826 15:01:32.766058    1810 log.go:172] (0x2b4a150) (0x2900070) Stream added, broadcasting: 3\nI0826 15:01:32.771621    1810 log.go:172] (0x2b4a150) Reply frame received for 3\nI0826 15:01:32.772517    1810 log.go:172] (0x2b4a150) (0x2974070) Create stream\nI0826 15:01:32.772827    1810 log.go:172] (0x2b4a150) (0x2974070) Stream added, broadcasting: 5\nI0826 15:01:32.777446    1810 log.go:172] (0x2b4a150) Reply frame received for 5\nI0826 15:01:32.853451    1810 log.go:172] (0x2b4a150) Data frame received for 5\nI0826 15:01:32.853853    1810 log.go:172] (0x2974070) (5) Data frame handling\nI0826 15:01:32.854506    1810 log.go:172] (0x2974070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:01:32.989777    1810 log.go:172] (0x2b4a150) Data frame received for 3\nI0826 15:01:32.990076    1810 log.go:172] (0x2900070) (3) Data frame handling\nI0826 15:01:32.990285    1810 log.go:172] (0x2900070) (3) Data frame sent\nI0826 15:01:32.990460    1810 log.go:172] (0x2b4a150) Data frame received for 3\nI0826 15:01:32.990636    1810 log.go:172] (0x2900070) (3) Data frame handling\nI0826 15:01:32.990993    1810 log.go:172] (0x2b4a150) Data frame received for 5\nI0826 15:01:32.991234    1810 log.go:172] (0x2974070) (5) Data frame handling\nI0826 15:01:32.991939    1810 log.go:172] (0x2b4a150) Data frame received for 1\nI0826 15:01:32.992044    1810 log.go:172] (0x2b4a1c0) (1) Data frame handling\nI0826 15:01:32.992155    1810 log.go:172] (0x2b4a1c0) (1) Data frame sent\nI0826 15:01:32.994434    1810 log.go:172] (0x2b4a150) (0x2b4a1c0) Stream removed, broadcasting: 1\nI0826 15:01:32.995539    1810 log.go:172] (0x2b4a150) Go away received\nI0826 15:01:32.999216    1810 log.go:172] (0x2b4a150) (0x2b4a1c0) Stream removed, broadcasting: 1\nI0826 15:01:32.999497    1810 log.go:172] (0x2b4a150) (0x2900070) Stream removed, broadcasting: 3\nI0826 15:01:32.999673    1810 log.go:172] (0x2b4a150) (0x2974070) Stream removed, broadcasting: 5\n"
Aug 26 15:01:33.016: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:01:33.016: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
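
For context: moving index.html out of the htdocs directory is how this fixture makes each pod go Running but not Ready — the httpd image here appears to serve /index.html, which the readiness probe fetches. A minimal shell sketch of the same step, using the namespace and pod names from this run:

# Break the readiness probe on each stateful pod by hiding index.html.
# '|| true' mirrors the framework's tolerance of an already-moved file.
for pod in ss-0 ss-1 ss-2; do
  kubectl --kubeconfig=/root/.kube/config exec -n statefulset-6328 "$pod" -- \
    /bin/sh -x -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
done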

Aug 26 15:01:33.016: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 15:01:33.119: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 26 15:01:43.383: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 15:01:43.383: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 15:01:43.383: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 15:01:43.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999983099s
Aug 26 15:01:44.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.635823597s
Aug 26 15:01:45.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.627340882s
Aug 26 15:01:46.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.619370172s
Aug 26 15:01:47.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.609341061s
Aug 26 15:01:48.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.598698756s
Aug 26 15:01:49.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.591669793s
Aug 26 15:01:51.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.580063316s
Aug 26 15:01:52.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.347865296s
Aug 26 15:01:53.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 314.963622ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6328
Aug 26 15:01:54.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:01:55.772: INFO: stderr: "I0826 15:01:55.639303    1834 log.go:172] (0x2978000) (0x2978070) Create stream\nI0826 15:01:55.642421    1834 log.go:172] (0x2978000) (0x2978070) Stream added, broadcasting: 1\nI0826 15:01:55.657454    1834 log.go:172] (0x2978000) Reply frame received for 1\nI0826 15:01:55.658094    1834 log.go:172] (0x2978000) (0x26ffc70) Create stream\nI0826 15:01:55.658166    1834 log.go:172] (0x2978000) (0x26ffc70) Stream added, broadcasting: 3\nI0826 15:01:55.659431    1834 log.go:172] (0x2978000) Reply frame received for 3\nI0826 15:01:55.659634    1834 log.go:172] (0x2978000) (0x24ae070) Create stream\nI0826 15:01:55.659707    1834 log.go:172] (0x2978000) (0x24ae070) Stream added, broadcasting: 5\nI0826 15:01:55.660824    1834 log.go:172] (0x2978000) Reply frame received for 5\nI0826 15:01:55.750600    1834 log.go:172] (0x2978000) Data frame received for 3\nI0826 15:01:55.751073    1834 log.go:172] (0x2978000) Data frame received for 5\nI0826 15:01:55.751380    1834 log.go:172] (0x2978000) Data frame received for 1\nI0826 15:01:55.751507    1834 log.go:172] (0x2978070) (1) Data frame handling\nI0826 15:01:55.751730    1834 log.go:172] (0x26ffc70) (3) Data frame handling\nI0826 15:01:55.752003    1834 log.go:172] (0x24ae070) (5) Data frame handling\nI0826 15:01:55.753081    1834 log.go:172] (0x24ae070) (5) Data frame sent\nI0826 15:01:55.753229    1834 log.go:172] (0x26ffc70) (3) Data frame sent\nI0826 15:01:55.753487    1834 log.go:172] (0x2978000) Data frame received for 3\nI0826 15:01:55.753560    1834 log.go:172] (0x26ffc70) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 15:01:55.753871    1834 log.go:172] (0x2978070) (1) Data frame sent\nI0826 15:01:55.753998    1834 log.go:172] (0x2978000) Data frame received for 5\nI0826 15:01:55.754106    1834 log.go:172] (0x24ae070) (5) Data frame handling\nI0826 15:01:55.754736    1834 log.go:172] (0x2978000) (0x2978070) Stream removed, broadcasting: 1\nI0826 15:01:55.757010    1834 log.go:172] (0x2978000) Go away received\nI0826 15:01:55.759292    1834 log.go:172] (0x2978000) (0x2978070) Stream removed, broadcasting: 1\nI0826 15:01:55.759509    1834 log.go:172] (0x2978000) (0x26ffc70) Stream removed, broadcasting: 3\nI0826 15:01:55.759636    1834 log.go:172] (0x2978000) (0x24ae070) Stream removed, broadcasting: 5\n"
Aug 26 15:01:55.774: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 15:01:55.774: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 15:01:55.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:01:57.154: INFO: stderr: "I0826 15:01:57.051556    1857 log.go:172] (0x28a0070) (0x28a00e0) Create stream\nI0826 15:01:57.053332    1857 log.go:172] (0x28a0070) (0x28a00e0) Stream added, broadcasting: 1\nI0826 15:01:57.067934    1857 log.go:172] (0x28a0070) Reply frame received for 1\nI0826 15:01:57.068350    1857 log.go:172] (0x28a0070) (0x24bc770) Create stream\nI0826 15:01:57.068413    1857 log.go:172] (0x28a0070) (0x24bc770) Stream added, broadcasting: 3\nI0826 15:01:57.079398    1857 log.go:172] (0x28a0070) Reply frame received for 3\nI0826 15:01:57.081104    1857 log.go:172] (0x28a0070) (0x25f6e70) Create stream\nI0826 15:01:57.081229    1857 log.go:172] (0x28a0070) (0x25f6e70) Stream added, broadcasting: 5\nI0826 15:01:57.084365    1857 log.go:172] (0x28a0070) Reply frame received for 5\nI0826 15:01:57.133550    1857 log.go:172] (0x28a0070) Data frame received for 3\nI0826 15:01:57.133937    1857 log.go:172] (0x28a0070) Data frame received for 5\nI0826 15:01:57.134266    1857 log.go:172] (0x28a0070) Data frame received for 1\nI0826 15:01:57.134500    1857 log.go:172] (0x28a00e0) (1) Data frame handling\nI0826 15:01:57.134815    1857 log.go:172] (0x25f6e70) (5) Data frame handling\nI0826 15:01:57.135078    1857 log.go:172] (0x24bc770) (3) Data frame handling\nI0826 15:01:57.135308    1857 log.go:172] (0x25f6e70) (5) Data frame sent\nI0826 15:01:57.135506    1857 log.go:172] (0x24bc770) (3) Data frame sent\nI0826 15:01:57.135682    1857 log.go:172] (0x28a00e0) (1) Data frame sent\nI0826 15:01:57.135870    1857 log.go:172] (0x28a0070) Data frame received for 3\nI0826 15:01:57.135925    1857 log.go:172] (0x24bc770) (3) Data frame handling\nI0826 15:01:57.136019    1857 log.go:172] (0x28a0070) Data frame received for 5\nI0826 15:01:57.136142    1857 log.go:172] (0x25f6e70) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 15:01:57.138148    1857 log.go:172] (0x28a0070) (0x28a00e0) Stream removed, broadcasting: 1\nI0826 15:01:57.138711    1857 log.go:172] (0x28a0070) Go away received\nI0826 15:01:57.141305    1857 log.go:172] (0x28a0070) (0x28a00e0) Stream removed, broadcasting: 1\nI0826 15:01:57.141662    1857 log.go:172] (0x28a0070) (0x24bc770) Stream removed, broadcasting: 3\nI0826 15:01:57.141848    1857 log.go:172] (0x28a0070) (0x25f6e70) Stream removed, broadcasting: 5\n"
Aug 26 15:01:57.154: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 15:01:57.155: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 15:01:57.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:01:58.726: INFO: rc: 1
Aug 26 15:01:58.727: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 26 15:02:08.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:02:09.998: INFO: rc: 1
Aug 26 15:02:09.998: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 26 15:02:19.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:02:21.156: INFO: rc: 1
Aug 26 15:02:21.157: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:02:31.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:02:32.318: INFO: rc: 1
Aug 26 15:02:32.319: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:02:42.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:02:43.423: INFO: rc: 1
Aug 26 15:02:43.423: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:02:53.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:02:54.525: INFO: rc: 1
Aug 26 15:02:54.525: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:03:04.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:03:05.748: INFO: rc: 1
Aug 26 15:03:05.749: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:03:15.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:03:16.874: INFO: rc: 1
Aug 26 15:03:16.874: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:03:26.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:03:28.021: INFO: rc: 1
Aug 26 15:03:28.021: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:03:38.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:03:39.231: INFO: rc: 1
Aug 26 15:03:39.232: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:03:49.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:03:50.423: INFO: rc: 1
Aug 26 15:03:50.423: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:04:00.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:04:01.574: INFO: rc: 1
Aug 26 15:04:01.574: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:04:11.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:04:12.715: INFO: rc: 1
Aug 26 15:04:12.715: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:04:22.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:04:23.844: INFO: rc: 1
Aug 26 15:04:23.844: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:04:33.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:04:35.175: INFO: rc: 1
Aug 26 15:04:35.175: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:04:45.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:04:46.267: INFO: rc: 1
Aug 26 15:04:46.267: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:04:56.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:04:57.535: INFO: rc: 1
Aug 26 15:04:57.535: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:05:07.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:05:08.645: INFO: rc: 1
Aug 26 15:05:08.645: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:05:18.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:05:19.753: INFO: rc: 1
Aug 26 15:05:19.753: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:05:29.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:05:30.858: INFO: rc: 1
Aug 26 15:05:30.858: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:05:40.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:05:42.026: INFO: rc: 1
Aug 26 15:05:42.027: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:05:52.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:05:53.085: INFO: rc: 1
Aug 26 15:05:53.085: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:06:03.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:06:04.206: INFO: rc: 1
Aug 26 15:06:04.206: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:06:14.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:06:15.361: INFO: rc: 1
Aug 26 15:06:15.362: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:06:25.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:06:26.533: INFO: rc: 1
Aug 26 15:06:26.534: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:06:36.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:06:38.674: INFO: rc: 1
Aug 26 15:06:38.675: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:06:48.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:06:49.793: INFO: rc: 1
Aug 26 15:06:49.794: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 15:06:59.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6328 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:07:00.976: INFO: rc: 1
Aug 26 15:07:00.976: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Aug 26 15:07:00.977: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 15:07:01.124: INFO: Deleting all statefulset in ns statefulset-6328
Aug 26 15:07:01.130: INFO: Scaling statefulset ss to 0
Aug 26 15:07:01.144: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 15:07:01.147: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:07:01.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6328" for this suite.

• [SLOW TEST:390.578 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":146,"skipped":2362,"failed":0}
SSSSS
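
A rough way to reproduce the scale-down and the reverse-order check above by hand (names from this run; the framework's polling is simplified to a watch):

# Scale to 0 and watch terminations: ss-2 should go first, then ss-1, then ss-0.
kubectl -n statefulset-6328 scale statefulset ss --replicas=0
kubectl -n statefulset-6328 get pods -w
# Confirm status.replicas has reached 0.
kubectl -n statefulset-6328 get statefulset ss -o jsonpath='{.status.replicas}'
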
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:07:01.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3863
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-3863
STEP: creating replication controller externalsvc in namespace services-3863
I0826 15:07:01.835169       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3863, replica count: 2
I0826 15:07:04.886944       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 15:07:07.887572       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 15:07:10.888369       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 26 15:07:11.053: INFO: Creating new exec pod
Aug 26 15:07:17.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3863 execpodr5jhk -- /bin/sh -x -c nslookup nodeport-service'
Aug 26 15:07:19.987: INFO: stderr: "I0826 15:07:18.627007    2501 log.go:172] (0x2709ce0) (0x2709e30) Create stream\nI0826 15:07:18.629547    2501 log.go:172] (0x2709ce0) (0x2709e30) Stream added, broadcasting: 1\nI0826 15:07:18.638490    2501 log.go:172] (0x2709ce0) Reply frame received for 1\nI0826 15:07:18.638976    2501 log.go:172] (0x2709ce0) (0x2926070) Create stream\nI0826 15:07:18.639056    2501 log.go:172] (0x2709ce0) (0x2926070) Stream added, broadcasting: 3\nI0826 15:07:18.640330    2501 log.go:172] (0x2709ce0) Reply frame received for 3\nI0826 15:07:18.640591    2501 log.go:172] (0x2709ce0) (0x27ea070) Create stream\nI0826 15:07:18.640662    2501 log.go:172] (0x2709ce0) (0x27ea070) Stream added, broadcasting: 5\nI0826 15:07:18.641699    2501 log.go:172] (0x2709ce0) Reply frame received for 5\nI0826 15:07:18.700548    2501 log.go:172] (0x2709ce0) Data frame received for 5\nI0826 15:07:18.701009    2501 log.go:172] (0x27ea070) (5) Data frame handling\nI0826 15:07:18.701751    2501 log.go:172] (0x27ea070) (5) Data frame sent\n+ nslookup nodeport-service\nI0826 15:07:19.952892    2501 log.go:172] (0x2709ce0) Data frame received for 3\nI0826 15:07:19.953035    2501 log.go:172] (0x2926070) (3) Data frame handling\nI0826 15:07:19.953115    2501 log.go:172] (0x2926070) (3) Data frame sent\nI0826 15:07:19.953775    2501 log.go:172] (0x2709ce0) Data frame received for 3\nI0826 15:07:19.953893    2501 log.go:172] (0x2926070) (3) Data frame handling\nI0826 15:07:19.954044    2501 log.go:172] (0x2926070) (3) Data frame sent\nI0826 15:07:19.954554    2501 log.go:172] (0x2709ce0) Data frame received for 3\nI0826 15:07:19.954767    2501 log.go:172] (0x2926070) (3) Data frame handling\nI0826 15:07:19.955308    2501 log.go:172] (0x2709ce0) Data frame received for 5\nI0826 15:07:19.955521    2501 log.go:172] (0x27ea070) (5) Data frame handling\nI0826 15:07:19.959722    2501 log.go:172] (0x2709ce0) Data frame received for 1\nI0826 15:07:19.959827    2501 log.go:172] (0x2709e30) (1) Data frame handling\nI0826 15:07:19.959941    2501 log.go:172] (0x2709e30) (1) Data frame sent\nI0826 15:07:19.960355    2501 log.go:172] (0x2709ce0) (0x2709e30) Stream removed, broadcasting: 1\nI0826 15:07:19.962908    2501 log.go:172] (0x2709ce0) (0x2709e30) Stream removed, broadcasting: 1\nI0826 15:07:19.964109    2501 log.go:172] (0x2709ce0) (0x2926070) Stream removed, broadcasting: 3\nI0826 15:07:19.964611    2501 log.go:172] (0x2709ce0) Go away received\nI0826 15:07:19.964939    2501 log.go:172] (0x2709ce0) (0x27ea070) Stream removed, broadcasting: 5\n"
Aug 26 15:07:19.988: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3863.svc.cluster.local\tcanonical name = externalsvc.services-3863.svc.cluster.local.\nName:\texternalsvc.services-3863.svc.cluster.local\nAddress: 10.104.110.49\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3863, will wait for the garbage collector to delete the pods
Aug 26 15:07:20.889: INFO: Deleting ReplicationController externalsvc took: 616.893643ms
Aug 26 15:07:21.790: INFO: Terminating ReplicationController externalsvc pods took: 900.803355ms
Aug 26 15:07:33.714: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:07:34.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3863" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:33.154 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":147,"skipped":2367,"failed":0}
SSSSSSSSSSS
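
The DNS assertion in this test boils down to the nslookup above: once nodeport-service is switched to type=ExternalName, its cluster DNS name should resolve as a CNAME for externalsvc. A sketch of that check, reusing the exec pod from this run:

# Expect a CNAME: nodeport-service.services-3863... -> externalsvc.services-3863.svc.cluster.local
kubectl -n services-3863 exec execpodr5jhk -- /bin/sh -c 'nslookup nodeport-service'
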
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:07:34.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:07:46.599: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 15:07:48.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:07:50.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:07:52.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051266, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:07:55.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:07:55.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-521" for this suite.
STEP: Destroying namespace "webhook-521-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:23.067 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":148,"skipped":2378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
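
The discovery walk in this test can be repeated with raw API requests; the v1 document should list both webhook configuration resources:

# Group, then group/version, discovery documents.
kubectl get --raw /apis/admissionregistration.k8s.io
kubectl get --raw /apis/admissionregistration.k8s.io/v1
# Expect mutatingwebhookconfigurations and validatingwebhookconfigurations
# in the v1 document's "resources" list.
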
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:07:57.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Aug 26 15:07:59.148: INFO: Waiting up to 5m0s for pod "client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d" in namespace "containers-3015" to be "success or failure"
Aug 26 15:07:59.258: INFO: Pod "client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d": Phase="Pending", Reason="", readiness=false. Elapsed: 109.540988ms
Aug 26 15:08:01.479: INFO: Pod "client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330555091s
Aug 26 15:08:03.605: INFO: Pod "client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456449658s
Aug 26 15:08:05.749: INFO: Pod "client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.599988829s
STEP: Saw pod success
Aug 26 15:08:05.749: INFO: Pod "client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d" satisfied condition "success or failure"
Aug 26 15:08:05.754: INFO: Trying to get logs from node jerma-worker pod client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d container test-container: 
STEP: delete the pod
Aug 26 15:08:05.842: INFO: Waiting for pod client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d to disappear
Aug 26 15:08:05.903: INFO: Pod client-containers-b1a6abd8-218b-499b-a8f2-15bce39b672d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:08:05.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3015" for this suite.

• [SLOW TEST:8.378 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2400,"failed":0}
SSSSSSSSSSSSSSSSSS
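
What "override all" means here: the pod spec replaces the image's default entrypoint and arguments. A minimal sketch (illustrative image and strings, not the test's actual spec) that replaces the entrypoint via kubectl run:

# Everything after --command -- becomes the container's command,
# overriding the image's default entrypoint and arguments.
kubectl run client-containers-demo --image=busybox --restart=Never \
  --command -- /bin/echo overridden entrypoint and args
kubectl logs client-containers-demo   # prints: overridden entrypoint and args
kubectl delete pod client-containers-demo
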
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:08:05.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 26 15:08:14.609: INFO: Successfully updated pod "annotationupdate6dbaae03-4838-4bf4-8d7c-32a3ac9c25cd"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:08:17.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5181" for this suite.

• [SLOW TEST:12.437 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2418,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
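
The update being verified: the framework edits the pod's annotations and then reads a projected downward-API file until the new value shows up. A hand-run sketch, assuming the projected volume is mounted at /etc/podinfo (the mount path and annotation key are not shown in this log):

# Change an annotation in place...
kubectl -n projected-5181 annotate pod annotationupdate6dbaae03-4838-4bf4-8d7c-32a3ac9c25cd \
  builder=updated --overwrite
# ...then re-read the projected file until the kubelet refreshes it.
kubectl -n projected-5181 exec annotationupdate6dbaae03-4838-4bf4-8d7c-32a3ac9c25cd -- \
  cat /etc/podinfo/annotations
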
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:08:18.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Aug 26 15:08:20.858: INFO: Waiting up to 5m0s for pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd" in namespace "var-expansion-6752" to be "success or failure"
Aug 26 15:08:21.149: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 290.566191ms
Aug 26 15:08:23.155: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296810141s
Aug 26 15:08:25.678: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.82040101s
Aug 26 15:08:27.712: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.853826435s
Aug 26 15:08:30.425: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.566849308s
Aug 26 15:08:33.060: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.202124898s
Aug 26 15:08:35.120: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.262357102s
STEP: Saw pod success
Aug 26 15:08:35.121: INFO: Pod "var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd" satisfied condition "success or failure"
Aug 26 15:08:35.460: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd container dapi-container: 
STEP: delete the pod
Aug 26 15:08:35.767: INFO: Waiting for pod var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd to disappear
Aug 26 15:08:36.545: INFO: Pod var-expansion-faf8e056-e9d7-4818-a8f3-f3da54ef1fdd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:08:36.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6752" for this suite.

• [SLOW TEST:18.370 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2457,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
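
The substitution under test is Kubernetes' own $(VAR) expansion in a container's args, which happens before the container starts. A self-contained sketch (illustrative names and values, not the test's pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["echo $(MY_VAR)"]   # expanded by Kubernetes to: echo hello
    env:
    - name: MY_VAR
      value: "hello"
EOF
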
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:08:36.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:08:37.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4031" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":152,"skipped":2495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
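
This short test asserts, roughly, that the built-in kubernetes Service exists in the default namespace and exposes the API over https/443. The equivalent manual check:

kubectl get service kubernetes -n default
# Expect the https port:
kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].port}'
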
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:08:37.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:08:49.744: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 15:08:53.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051331, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:08:55.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051331, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:08:57.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051331, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:08:59.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051331, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051329, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
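The repeated status dumps above show the framework polling until the webhook Deployment's Available condition becomes True; while the single replica is still unready, the controller keeps reporting Reason "MinimumReplicasUnavailable". A minimal sketch of the condition check behind that poll, assuming the k8s.io/api/apps/v1 types (the helper name deploymentAvailable is illustrative, not the framework's):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// deploymentAvailable reports whether the Deployment's Available
// condition is True, i.e. at least spec.replicas - maxUnavailable
// replicas are ready and available.
func deploymentAvailable(d *appsv1.Deployment) bool {
	for _, cond := range d.Status.Conditions {
		if cond.Type == appsv1.DeploymentAvailable && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	d := &appsv1.Deployment{} // in practice, fetched via client-go
	fmt.Println(deploymentAvailable(d)) // false until enough replicas are ready
}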
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:09:03.491: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
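The 5s "webhook latency" exercised below comes from the test's webhook server deliberately delaying its AdmissionReview response. A minimal sketch of such a slow admission handler, assuming a plain net/http server; the handler path, the 5-second delay, and the TLS file locations are illustrative, not the actual e2e webhook image:

package main

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"net/http"
	"time"
)

// admissionReview mirrors only the admission.k8s.io/v1 AdmissionReview
// fields this sketch needs.
type admissionReview struct {
	APIVersion string             `json:"apiVersion"`
	Kind       string             `json:"kind"`
	Request    *admissionRequest  `json:"request,omitempty"`
	Response   *admissionResponse `json:"response,omitempty"`
}

type admissionRequest struct {
	UID string `json:"uid"`
}

type admissionResponse struct {
	UID     string `json:"uid"`
	Allowed bool   `json:"allowed"`
}

func main() {
	http.HandleFunc("/always-allow-delay-5s", func(w http.ResponseWriter, r *http.Request) {
		body, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		var review admissionReview
		if err := json.Unmarshal(body, &review); err != nil || review.Request == nil {
			http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
			return
		}
		// Sleep past the 1s timeout the test registers, but well under
		// the 10s v1 default, so only the short-timeout case fails.
		time.Sleep(5 * time.Second)
		review.Response = &admissionResponse{UID: review.Request.UID, Allowed: true}
		review.Request = nil
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(review)
	})
	// The apiserver calls webhooks over TLS only; the cert/key paths
	// stand in for the serving certificate generated during setup.
	log.Fatal(http.ListenAndServeTLS(":8443", "/tls/cert.pem", "/tls/key.pem", nil))
}

Because the apiserver enforces timeoutSeconds on its own side, the handler never needs to know which timeout was registered against it.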
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
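The four registrations above differ only in timeoutSeconds and failurePolicy. A hedged client-go sketch of one such registration; the service name e2e-test-webhook matches the log, while the configuration name, the rule set, and the context-taking Create signature (client-go v0.18+) are assumptions:

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerSlowWebhook points a validating webhook at the e2e-test-webhook
// Service. timeoutSeconds may be nil, in which case the apiserver
// defaults it to 10s for admissionregistration.k8s.io/v1.
func registerSlowWebhook(ctx context.Context, client kubernetes.Interface,
	namespace string, timeoutSeconds *int32, policy admissionv1.FailurePolicyType) error {

	path := "/always-allow-delay-5s" // assumed handler path, see the sketch above
	sideEffects := admissionv1.SideEffectClassNone
	cfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook.example.com"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "slow-webhook.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: namespace,
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				// CABundle elided; the test injects the CA generated
				// during "Setting up server cert".
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &policy,
			TimeoutSeconds:          timeoutSeconds,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	_, err := client.AdmissionregistrationV1().
		ValidatingWebhookConfigurations().
		Create(ctx, cfg, metav1.CreateOptions{})
	return err
}

With timeoutSeconds=1 and policy admissionv1.Fail, a matching request is rejected once the 5s handler overruns the deadline; with admissionv1.Ignore the apiserver treats the timeout as admission success; leaving TimeoutSeconds nil lets v1 default it to 10s, which comfortably covers the 5s delay.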
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:09:21.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8983" for this suite.
STEP: Destroying namespace "webhook-8983-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:47.366 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":153,"skipped":2524,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:09:24.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
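"Status is promptly calculated" here means the quota controller fills in .status.hard (mirroring .spec.hard) and .status.used (zero for each tracked resource in a fresh namespace) shortly after the quota is created. A minimal client-go sketch of the create-and-poll pattern; the quota name and limits are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createQuotaAndWait creates a ResourceQuota, then polls until the quota
// controller has published a usage entry for every hard limit.
func createQuotaAndWait(ctx context.Context, client kubernetes.Interface, ns string) error {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:    resource.MustParse("5"),
				corev1.ResourceSecrets: resource.MustParse("10"),
			},
		},
	}
	if _, err := client.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		got, err := client.CoreV1().ResourceQuotas(ns).Get(ctx, "test-quota", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for name := range got.Spec.Hard {
			if _, ok := got.Status.Used[name]; !ok {
				return false, nil // controller has not counted this resource yet
			}
		}
		fmt.Printf("quota status: hard=%v used=%v\n", got.Status.Hard, got.Status.Used)
		return true, nil
	})
}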
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:09:33.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6705" for this suite.

• [SLOW TEST:9.086 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":154,"skipped":2544,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:09:33.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:09:34.067: INFO: Creating deployment "webserver-deployment"
Aug 26 15:09:34.088: INFO: Waiting for observed generation 1
Aug 26 15:09:36.113: INFO: Waiting for all required pods to come up
Aug 26 15:09:36.682: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 26 15:09:56.859: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 26 15:09:56.869: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 26 15:09:56.882: INFO: Updating deployment webserver-deployment
Aug 26 15:09:56.882: INFO: Waiting for observed generation 2
Aug 26 15:09:59.449: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 26 15:09:59.959: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 26 15:10:00.326: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 26 15:10:00.701: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 26 15:10:00.701: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 26 15:10:00.705: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 26 15:10:00.715: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 26 15:10:00.715: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 26 15:10:00.722: INFO: Updating deployment webserver-deployment
Aug 26 15:10:00.723: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 26 15:10:00.914: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 26 15:10:03.265: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
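The 20/13 split above is proportional scaling. At the stall point the rollout holds 13 pods total: maxUnavailable:2 lets the old ReplicaSet drop to 8, and maxSurge:3 caps the total at 10 + 3 = 13, leaving 5 for the new ReplicaSet; that 13 was recorded in the deployment.kubernetes.io/max-replicas annotation (the dumps below show it already updated to 33 after the scale). Scaling the Deployment from 10 to 30 raises the allowed total to 30 + 3 = 33, and each ReplicaSet grows in proportion to its current share: round(8 × 33 / 13) = 20 and round(5 × 33 / 13) = 13, summing to exactly 33. A sketch of that arithmetic, modeled loosely on the deployment controller's proportional-scaling helper (the function name and rounding here are assumptions):

package main

import (
	"fmt"
	"math"
)

// proportionalSize computes a ReplicaSet's new size when its Deployment
// is scaled mid-rollout: the set keeps its share of the new allowed
// total (spec.replicas + maxSurge) relative to the previous allowed
// total recorded in the max-replicas annotation.
func proportionalSize(rsReplicas, newReplicas, maxSurge, prevMaxReplicas int32) int32 {
	newTotal := float64(newReplicas + maxSurge)
	return int32(math.Round(float64(rsReplicas) * newTotal / float64(prevMaxReplicas)))
}

func main() {
	// Numbers from the log: old RS at 8, new RS at 5, previous
	// max-replicas 13 (10 + maxSurge 3), deployment scaled 10 -> 30.
	fmt.Println(proportionalSize(8, 30, 3, 13)) // 20
	fmt.Println(proportionalSize(5, 30, 3, 13)) // 13
}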
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 15:10:04.389: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-2490 /apis/apps/v1/namespaces/deployment-2490/deployments/webserver-deployment 27452234-9e82-403e-97a5-fea6dbb18cdc 3910764 3 2020-08-26 15:09:34 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x892f9c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-26 15:10:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-26 15:10:01 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 26 15:10:04.610: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-2490 /apis/apps/v1/namespaces/deployment-2490/replicasets/webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 3910759 3 2020-08-26 15:09:56 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 27452234-9e82-403e-97a5-fea6dbb18cdc 0x8a665b7 0x8a665b8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8a66628  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 15:10:04.610: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 26 15:10:04.610: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-2490 /apis/apps/v1/namespaces/deployment-2490/replicasets/webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 3910742 3 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 27452234-9e82-403e-97a5-fea6dbb18cdc 0x8a663b7 0x8a663b8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x8a66558  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 26 15:10:04.759: INFO: Pod "webserver-deployment-595b5b9587-478nb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-478nb webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-478nb 89639b67-db0f-44be-a4c8-b0784ccbfd18 3910769 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e00e7 0x81e00e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.761: INFO: Pod "webserver-deployment-595b5b9587-6pzvm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6pzvm webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-6pzvm b53edb94-ec7e-459a-ac9c-5b405913637f 3910741 0 2020-08-26 15:10:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0257 0x81e0258}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.762: INFO: Pod "webserver-deployment-595b5b9587-7vzr8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7vzr8 webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-7vzr8 8120e9f6-c075-44ed-a230-2312ad9b9690 3910756 0 2020-08-26 15:10:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e03b7 0x81e03b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.763: INFO: Pod "webserver-deployment-595b5b9587-8bdkv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8bdkv webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-8bdkv 3cfcdc79-5d2c-4ce3-8a21-9c48a13e20ba 3910785 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0537 0x81e0538}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.765: INFO: Pod "webserver-deployment-595b5b9587-8v52r" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8v52r webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-8v52r f2ff3dcc-540d-4937-be41-1802ccd12af7 3910601 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0697 0x81e0698}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.227,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bae2965c7d83fe7c12a87a45b360a29fcf9369b232022f0a27cf9edf1b13c673,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.767: INFO: Pod "webserver-deployment-595b5b9587-9cjp6" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9cjp6 webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-9cjp6 b503f00b-c27c-4520-ab93-1f0cd42573a9 3910587 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0817 0x81e0818}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.225,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7c0701b4ca475ee0c867ce447e2563fad2820903ddf14261e26bac696d4e1daa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.768: INFO: Pod "webserver-deployment-595b5b9587-9hgpd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9hgpd webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-9hgpd bc1f8bda-a957-4b84-ad4d-9b700293b0fa 3910730 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0997 0x81e0998}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.769: INFO: Pod "webserver-deployment-595b5b9587-9hxkk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9hxkk webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-9hxkk 9bc0d341-7e83-4be8-bb70-f1f678234076 3910576 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0ab7 0x81e0ab8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.224,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6d501155bca4730fb039b062991db2b4b516d97c8fb564f88a1dc64c4d352523,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.771: INFO: Pod "webserver-deployment-595b5b9587-bpsxj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bpsxj webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-bpsxj 3f4b3dce-c28f-44da-b651-c9dfa08a9e84 3910770 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0c37 0x81e0c38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.772: INFO: Pod "webserver-deployment-595b5b9587-dw5cb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dw5cb webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-dw5cb cfa72b87-1925-456f-bf52-e673978f9b72 3910553 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0d97 0x81e0d98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.101,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c60a0fad559e5108476dcbbf9b1e93c0ac025ce2f47f5ad6d435dca4ccc029b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
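The "is available" verdict on a dump like the one above tracks the pod's Ready condition: the framework counts a pod toward the deployment's available replicas once Ready is True (and has stayed True for minReadySeconds, when that is set). A minimal Go sketch of that check, assuming only the well-known k8s.io/api core/v1 types; isPodAvailable is an illustrative name, not the e2e framework's actual helper:

    package main

    import (
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // isPodAvailable is an illustrative stand-in for the check behind the
    // "is available" log lines: the pod's Ready condition must be True and,
    // when minReadySeconds is set, must have been True for at least that long.
    func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type != corev1.PodReady {
    			continue
    		}
    		if c.Status != corev1.ConditionTrue {
    			return false
    		}
    		minDur := time.Duration(minReadySeconds) * time.Second
    		return minReadySeconds == 0 || now.Sub(c.LastTransitionTime.Time) >= minDur
    	}
    	return false
    }

    func main() {
    	// Mirrors webserver-deployment-595b5b9587-dw5cb above: Ready=True since 15:09:51.
    	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{
    		Type:               corev1.PodReady,
    		Status:             corev1.ConditionTrue,
    		LastTransitionTime: metav1.Date(2020, time.August, 26, 15, 9, 51, 0, time.UTC),
    	}}}}
    	fmt.Println(isPodAvailable(pod, 0, time.Date(2020, time.August, 26, 15, 10, 4, 0, time.UTC))) // true
    }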
Aug 26 15:10:04.773: INFO: Pod "webserver-deployment-595b5b9587-gs27l" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gs27l webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-gs27l 819d6302-0f36-421b-91a8-cd7703612cb5 3910748 0 2020-08-26 15:10:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e0f17 0x81e0f18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
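The "not available" pods, by contrast, report Ready=False with reason ContainersNotReady while the httpd container sits in Waiting{Reason:ContainerCreating}: the kubelet has accepted the pod but the container has not started yet. A short sketch, under the same core/v1 type assumptions as above, that surfaces those waiting reasons the way the dumps do in State.Waiting:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // waitingReasons collects Waiting reasons (ContainerCreating, ErrImagePull,
    // and so on) from a pod's container statuses — the field the unready
    // dumps above expose as State.Waiting.
    func waitingReasons(pod *corev1.Pod) []string {
    	var out []string
    	for _, cs := range pod.Status.ContainerStatuses {
    		if cs.State.Waiting != nil {
    			out = append(out, cs.Name+": "+cs.State.Waiting.Reason)
    		}
    	}
    	return out
    }

    func main() {
    	// Shaped like webserver-deployment-595b5b9587-gs27l above.
    	pod := &corev1.Pod{Status: corev1.PodStatus{ContainerStatuses: []corev1.ContainerStatus{{
    		Name:  "httpd",
    		State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"}},
    	}}}}
    	fmt.Println(waitingReasons(pod)) // [httpd: ContainerCreating]
    }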
Aug 26 15:10:04.774: INFO: Pod "webserver-deployment-595b5b9587-hpz74" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hpz74 webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-hpz74 a89dc850-ab64-4a37-a3c1-bb9aec8d7b36 3910725 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e1077 0x81e1078}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
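Pods like hpz74 above show a third, earlier state: only the scheduler-set PodScheduled condition is present, and HostIP, PodIP, StartTime, and ContainerStatuses are all empty. The pod is bound to jerma-worker2, but the kubelet there has not yet posted any status. A hedged sketch of detecting that window (awaitingKubelet is an illustrative name, not an upstream helper):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // awaitingKubelet reports whether a pod has been scheduled but the kubelet
    // has not yet posted status: no container statuses and none of the
    // kubelet-managed conditions (Initialized, Ready, ContainersReady).
    func awaitingKubelet(pod *corev1.Pod) bool {
    	if len(pod.Status.ContainerStatuses) > 0 {
    		return false
    	}
    	for _, c := range pod.Status.Conditions {
    		switch c.Type {
    		case corev1.PodInitialized, corev1.PodReady, corev1.ContainersReady:
    			return false // kubelet has started reporting on this pod
    		}
    	}
    	return true
    }

    func main() {
    	scheduledOnly := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{
    		Type: corev1.PodScheduled, Status: corev1.ConditionTrue,
    	}}}}
    	fmt.Println(awaitingKubelet(scheduledOnly)) // true
    }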
Aug 26 15:10:04.775: INFO: Pod "webserver-deployment-595b5b9587-ldjb4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ldjb4 webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-ldjb4 95864717-7220-4b9e-b73b-f2c17261633b 3910586 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x81e1197 0x81e1198}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.103,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2345bf950b9b1b91559adc3defde20f3115b04df4ea467c71a05dca23f155e6a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.776: INFO: Pod "webserver-deployment-595b5b9587-nmsr9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nmsr9 webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-nmsr9 41d922d0-ca48-4470-b12c-cdc86837403e 3910793 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x9230017 0x9230018}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.778: INFO: Pod "webserver-deployment-595b5b9587-pn9qb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pn9qb webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-pn9qb d28acee6-55dc-4243-92cd-64979ad4cbcc 3910572 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x9230177 0x9230178}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.102,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ab1ad3c00946b85ed0f558f81bd8ea2911e2458640be31bffd1910ee1acbd4cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.779: INFO: Pod "webserver-deployment-595b5b9587-smkvk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-smkvk webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-smkvk a52b7360-96c2-4352-b064-5426be6225e9 3910800 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x92302f7 0x92302f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.780: INFO: Pod "webserver-deployment-595b5b9587-t9527" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t9527 webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-t9527 8397c692-c1f4-4bad-8b9a-e70943939719 3910591 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x9230457 0x9230458}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.104,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://67e8433e8d594b101b76d6b64466e970740182c5d905e359a28fb8b6be50f0bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.781: INFO: Pod "webserver-deployment-595b5b9587-tzwwd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tzwwd webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-tzwwd 643779d7-c4f6-46f5-a020-2fa124f3c872 3910729 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x92305d7 0x92305d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.782: INFO: Pod "webserver-deployment-595b5b9587-w944t" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w944t webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-w944t 2fe4bc08-15a2-47e7-98f9-91149c9aced5 3910569 0 2020-08-26 15:09:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x92306f7 0x92306f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-26 15:09:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.223,StartTime:2020-08-26 15:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:09:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1518b10e4012d7628087e138dcc575400205fb4d0229d721caf82dcbdd22678e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.784: INFO: Pod "webserver-deployment-595b5b9587-x7b8d" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x7b8d webserver-deployment-595b5b9587- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-595b5b9587-x7b8d b22fb2a1-ce8f-48b8-9c6d-b4fae08d43f8 3910772 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dde65079-a100-4674-b736-0b6f583f065c 0x9230877 0x9230878}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.785: INFO: Pod "webserver-deployment-c7997dcc8-4mgcz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4mgcz webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-4mgcz 47dd30e9-229a-4f5d-b8d4-884d6ec1f970 3910778 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x92309e7 0x92309e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
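From this dump onward the pods belong to the second ReplicaSet, webserver-deployment-c7997dcc8, whose template pins Image:webserver:404 — by all appearances a deliberately unpullable tag, so these pods never leave Pending and the rollout's new side cannot become available. The log does not print the deployment's replica count or RollingUpdate parameters, so the numbers in the sketch below are purely hypothetical; it only illustrates how a surge/unavailability budget resolves (maxSurge rounds up, maxUnavailable rounds down), which is what bounds how many old and new pods coexist in the mixed listing above:

    package main

    import (
    	"fmt"

    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	replicas := 10 // hypothetical: the log does not state the actual count
    	maxSurge := intstr.FromString("25%")
    	maxUnavailable := intstr.FromString("25%")

    	surge, _ := intstr.GetValueFromIntOrPercent(&maxSurge, replicas, true)          // rounds up
    	unavail, _ := intstr.GetValueFromIntOrPercent(&maxUnavailable, replicas, false) // rounds down

    	// With the new ReplicaSet stuck on an unpullable image (webserver:404),
    	// the rollout may create at most replicas+surge pods and must keep at
    	// least replicas-unavail old pods available, so it stalls at this mix
    	// of available old pods and pending new ones rather than completing.
    	fmt.Printf("max total pods: %d, min available: %d\n", replicas+surge, replicas-unavail)
    }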
Aug 26 15:10:04.787: INFO: Pod "webserver-deployment-c7997dcc8-56t8z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-56t8z webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-56t8z 5a8c1a2f-e0e3-4bae-bcf0-b4965c95a7a1 3910761 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9230b67 0x9230b68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.788: INFO: Pod "webserver-deployment-c7997dcc8-5q4gs" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5q4gs webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-5q4gs 592c2e1a-8ed2-402b-8fc1-4aa2b9572ea3 3910758 0 2020-08-26 15:10:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9230ce7 0x9230ce8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.790: INFO: Pod "webserver-deployment-c7997dcc8-792dq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-792dq webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-792dq 3bfb535c-493c-4319-8d47-bef6aed9efff 3910721 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9230e67 0x9230e68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.791: INFO: Pod "webserver-deployment-c7997dcc8-86rzg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-86rzg webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-86rzg 084f537c-3784-4f13-956b-fc1a5cbb621e 3910786 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9230f97 0x9230f98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.793: INFO: Pod "webserver-deployment-c7997dcc8-hqgpz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hqgpz webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-hqgpz eba54e53-26ab-452a-b6f5-cf2c663394a2 3910665 0 2020-08-26 15:09:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9231117 0x9231118}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 15:09:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.794: INFO: Pod "webserver-deployment-c7997dcc8-j4pqk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j4pqk webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-j4pqk eca62c23-f6f4-4245-90ef-0e0e9b8f81f8 3910635 0 2020-08-26 15:09:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x92312a7 0x92312a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:09:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.796: INFO: Pod "webserver-deployment-c7997dcc8-j5svt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j5svt webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-j5svt 317d7151-aaaf-4a96-99ad-558492ca2c20 3910765 0 2020-08-26 15:09:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9231427 0x9231428}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.228,StartTime:2020-08-26 15:09:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.797: INFO: Pod "webserver-deployment-c7997dcc8-jzldg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jzldg webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-jzldg 7c5fbaa4-b43c-425d-8a54-900d23cb56ba 3910723 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x92315f7 0x92315f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.799: INFO: Pod "webserver-deployment-c7997dcc8-nt7nn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nt7nn webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-nt7nn 1cb755a8-003d-41ba-8415-2d3211035e06 3910643 0 2020-08-26 15:09:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9231727 0x9231728}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:09:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.800: INFO: Pod "webserver-deployment-c7997dcc8-rx6sz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rx6sz webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-rx6sz d0e9f696-e445-482b-aac2-2d139f8b8434 3910666 0 2020-08-26 15:09:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x92318a7 0x92318a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:09:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:09:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.802: INFO: Pod "webserver-deployment-c7997dcc8-tcgcx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tcgcx webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-tcgcx c760a822-0555-4924-916c-b9f3f8e44700 3910734 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9231a27 0x9231a28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 15:10:04.803: INFO: Pod "webserver-deployment-c7997dcc8-tx8hh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tx8hh webserver-deployment-c7997dcc8- deployment-2490 /api/v1/namespaces/deployment-2490/pods/webserver-deployment-c7997dcc8-tx8hh 76c3b23d-99b9-45e8-9a21-5a3cdbcfa898 3910801 0 2020-08-26 15:10:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 18c19e55-f60f-422a-979d-9116efdda47c 0x9231b57 0x9231b58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k7wbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k7wbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k7wbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:10:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 15:10:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:10:04.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2490" for this suite.

• [SLOW TEST:31.466 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":155,"skipped":2548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:10:05.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-c520f8cc-27aa-4150-a0ad-e53fb175bcba
STEP: Creating a pod to test consume secrets
Aug 26 15:10:09.964: INFO: Waiting up to 5m0s for pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920" in namespace "secrets-5609" to be "success or failure"
Aug 26 15:10:10.648: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 683.803167ms
Aug 26 15:10:12.676: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712172178s
Aug 26 15:10:14.914: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 4.950721429s
Aug 26 15:10:17.098: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 7.134781414s
Aug 26 15:10:19.369: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 9.405623652s
Aug 26 15:10:22.312: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 12.348641933s
Aug 26 15:10:24.432: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 14.46842074s
Aug 26 15:10:26.661: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Pending", Reason="", readiness=false. Elapsed: 16.697434288s
Aug 26 15:10:29.012: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Running", Reason="", readiness=true. Elapsed: 19.048605444s
Aug 26 15:10:31.383: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Running", Reason="", readiness=true. Elapsed: 21.418918313s
Aug 26 15:10:33.443: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Running", Reason="", readiness=true. Elapsed: 23.479369614s
Aug 26 15:10:35.626: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Running", Reason="", readiness=true. Elapsed: 25.662143902s
Aug 26 15:10:37.674: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Running", Reason="", readiness=true. Elapsed: 27.71022531s
Aug 26 15:10:40.091: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Running", Reason="", readiness=true. Elapsed: 30.127753942s
Aug 26 15:10:42.152: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.187930476s
STEP: Saw pod success
Aug 26 15:10:42.152: INFO: Pod "pod-secrets-259cf101-af3b-42a8-b370-1726b074f920" satisfied condition "success or failure"
Aug 26 15:10:42.301: INFO: Trying to get logs from node jerma-worker pod pod-secrets-259cf101-af3b-42a8-b370-1726b074f920 container secret-volume-test: 
STEP: delete the pod
Aug 26 15:10:42.707: INFO: Waiting for pod pod-secrets-259cf101-af3b-42a8-b370-1726b074f920 to disappear
Aug 26 15:10:42.737: INFO: Pod pod-secrets-259cf101-af3b-42a8-b370-1726b074f920 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:10:42.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5609" for this suite.

• [SLOW TEST:37.616 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2586,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:10:42.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-4d28d011-6611-41c9-a2b7-42cc959ffa3f
STEP: Creating a pod to test consume secrets
Aug 26 15:10:44.088: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894" in namespace "projected-858" to be "success or failure"
Aug 26 15:10:44.150: INFO: Pod "pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894": Phase="Pending", Reason="", readiness=false. Elapsed: 62.169338ms
Aug 26 15:10:46.157: INFO: Pod "pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068921797s
Aug 26 15:10:48.408: INFO: Pod "pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319847891s
Aug 26 15:10:50.648: INFO: Pod "pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.559727017s
STEP: Saw pod success
Aug 26 15:10:50.648: INFO: Pod "pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894" satisfied condition "success or failure"
Aug 26 15:10:50.653: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 15:10:50.723: INFO: Waiting for pod pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894 to disappear
Aug 26 15:10:50.892: INFO: Pod pod-projected-secrets-9bd0530f-53d5-4f50-b68d-617e3e22a894 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:10:50.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-858" for this suite.

• [SLOW TEST:8.075 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2597,"failed":0}
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:10:50.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 15:10:51.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4787'
Aug 26 15:11:08.504: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 15:11:08.505: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Aug 26 15:11:10.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4787'
Aug 26 15:11:12.001: INFO: stderr: ""
Aug 26 15:11:12.002: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:11:12.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4787" for this suite.

• [SLOW TEST:21.109 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625
    should create a deployment from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":158,"skipped":2597,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:11:12.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:11:27.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 15:11:30.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051487, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:11:32.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051487, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:11:34.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051487, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:11:36.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051487, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:11:38.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051487, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:11:40.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051487, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:11:43.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051488, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051487, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:11:47.107: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:11:47.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2514-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:11:54.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9382" for this suite.
STEP: Destroying namespace "webhook-9382-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:47.401 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":159,"skipped":2609,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:11:59.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 15:12:01.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 15:12:02.213: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 15:12:02.286: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 15:12:02.299: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 15:12:02.299: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:12:02.299: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 26 15:12:02.299: INFO: 	Container app ready: true, restart count 0
Aug 26 15:12:02.299: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 15:12:02.299: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 15:12:02.299: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 15:12:02.599: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 15:12:02.599: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 15:12:02.599: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 26 15:12:02.599: INFO: 	Container httpd ready: true, restart count 0
Aug 26 15:12:02.599: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 15:12:02.599: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:12:02.599: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 26 15:12:02.599: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162eda1a037ad75f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162eda1a22c90f48], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:12:04.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8792" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":160,"skipped":2651,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:12:04.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 15:12:04.293: INFO: Waiting up to 5m0s for pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7" in namespace "emptydir-8126" to be "success or failure"
Aug 26 15:12:04.328: INFO: Pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.604755ms
Aug 26 15:12:07.853: INFO: Pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.559715679s
Aug 26 15:12:10.313: INFO: Pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019008738s
Aug 26 15:12:12.365: INFO: Pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07139635s
Aug 26 15:12:15.044: INFO: Pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7": Phase="Running", Reason="", readiness=true. Elapsed: 10.750534355s
Aug 26 15:12:17.050: INFO: Pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.756093642s
STEP: Saw pod success
Aug 26 15:12:17.050: INFO: Pod "pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7" satisfied condition "success or failure"
Aug 26 15:12:17.053: INFO: Trying to get logs from node jerma-worker pod pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7 container test-container: 
STEP: delete the pod
Aug 26 15:12:17.567: INFO: Waiting for pod pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7 to disappear
Aug 26 15:12:17.844: INFO: Pod pod-a0fff6b3-7013-4fc9-8901-4f45f571cfa7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:12:17.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8126" for this suite.

• [SLOW TEST:14.048 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2658,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:12:18.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 15:12:19.751: INFO: Waiting up to 5m0s for pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042" in namespace "downward-api-9117" to be "success or failure"
Aug 26 15:12:19.776: INFO: Pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042": Phase="Pending", Reason="", readiness=false. Elapsed: 24.206117ms
Aug 26 15:12:21.782: INFO: Pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03084185s
Aug 26 15:12:24.849: INFO: Pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042": Phase="Pending", Reason="", readiness=false. Elapsed: 5.097324546s
Aug 26 15:12:27.091: INFO: Pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042": Phase="Pending", Reason="", readiness=false. Elapsed: 7.339334038s
Aug 26 15:12:29.460: INFO: Pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042": Phase="Running", Reason="", readiness=true. Elapsed: 9.708409139s
Aug 26 15:12:31.625: INFO: Pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.873080258s
STEP: Saw pod success
Aug 26 15:12:31.625: INFO: Pod "downward-api-982baede-4c19-4af3-bfc7-740b740c0042" satisfied condition "success or failure"
Aug 26 15:12:31.941: INFO: Trying to get logs from node jerma-worker pod downward-api-982baede-4c19-4af3-bfc7-740b740c0042 container dapi-container: 
STEP: delete the pod
Aug 26 15:12:33.466: INFO: Waiting for pod downward-api-982baede-4c19-4af3-bfc7-740b740c0042 to disappear
Aug 26 15:12:33.809: INFO: Pod downward-api-982baede-4c19-4af3-bfc7-740b740c0042 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:12:33.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9117" for this suite.

• [SLOW TEST:16.420 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2668,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:12:34.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:12:37.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491" in namespace "downward-api-2550" to be "success or failure"
Aug 26 15:12:37.405: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491": Phase="Pending", Reason="", readiness=false. Elapsed: 65.64994ms
Aug 26 15:12:39.521: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181714746s
Aug 26 15:12:41.745: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404775275s
Aug 26 15:12:43.961: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621092669s
Aug 26 15:12:46.422: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491": Phase="Pending", Reason="", readiness=false. Elapsed: 9.081857852s
Aug 26 15:12:48.750: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491": Phase="Running", Reason="", readiness=true. Elapsed: 11.41022226s
Aug 26 15:12:50.935: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.595183501s
STEP: Saw pod success
Aug 26 15:12:50.935: INFO: Pod "downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491" satisfied condition "success or failure"
Aug 26 15:12:50.940: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491 container client-container: 
STEP: delete the pod
Aug 26 15:12:51.362: INFO: Waiting for pod downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491 to disappear
Aug 26 15:12:51.911: INFO: Pod downwardapi-volume-1fd23dc6-bd17-43aa-804e-5c7a9b97b491 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:12:51.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2550" for this suite.

• [SLOW TEST:17.390 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2673,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:12:51.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6265
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6265
STEP: Creating statefulset with conflicting port in namespace statefulset-6265
STEP: Waiting until pod test-pod starts running in namespace statefulset-6265
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6265
Aug 26 15:13:05.847: INFO: Observed stateful pod in namespace: statefulset-6265, name: ss-0, uid: 90d168e3-57c1-4e2f-bce5-0e1ce304d279, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 15:13:06.505: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6265
STEP: Removing pod with conflicting port in namespace statefulset-6265
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6265 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 15:13:18.193: INFO: Deleting all statefulset in ns statefulset-6265
Aug 26 15:13:18.198: INFO: Scaling statefulset ss to 0
Aug 26 15:13:28.533: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 15:13:28.768: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:13:29.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6265" for this suite.

• [SLOW TEST:37.831 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":164,"skipped":2673,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:13:29.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 15:13:31.782: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:32.216: INFO: Number of nodes with available pods: 0
Aug 26 15:13:32.216: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:33.237: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:34.189: INFO: Number of nodes with available pods: 0
Aug 26 15:13:34.189: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:34.875: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:35.624: INFO: Number of nodes with available pods: 0
Aug 26 15:13:35.625: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:36.770: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:37.164: INFO: Number of nodes with available pods: 0
Aug 26 15:13:37.164: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:37.248: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:38.195: INFO: Number of nodes with available pods: 0
Aug 26 15:13:38.195: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:38.291: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:38.423: INFO: Number of nodes with available pods: 0
Aug 26 15:13:38.423: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:39.246: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:39.779: INFO: Number of nodes with available pods: 0
Aug 26 15:13:39.779: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:40.374: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:40.647: INFO: Number of nodes with available pods: 0
Aug 26 15:13:40.647: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:41.649: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:41.950: INFO: Number of nodes with available pods: 2
Aug 26 15:13:41.950: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 26 15:13:43.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:43.823: INFO: Number of nodes with available pods: 1
Aug 26 15:13:43.823: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:45.422: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:46.174: INFO: Number of nodes with available pods: 1
Aug 26 15:13:46.174: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:47.086: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:47.936: INFO: Number of nodes with available pods: 1
Aug 26 15:13:47.936: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:49.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:50.307: INFO: Number of nodes with available pods: 1
Aug 26 15:13:50.307: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:51.000: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:51.258: INFO: Number of nodes with available pods: 1
Aug 26 15:13:51.258: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:51.829: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:52.286: INFO: Number of nodes with available pods: 1
Aug 26 15:13:52.286: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:52.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:52.835: INFO: Number of nodes with available pods: 1
Aug 26 15:13:52.835: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:13:54.135: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:13:54.498: INFO: Number of nodes with available pods: 2
Aug 26 15:13:54.498: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9206, will wait for the garbage collector to delete the pods
Aug 26 15:13:54.792: INFO: Deleting DaemonSet.extensions daemon-set took: 185.38909ms
Aug 26 15:13:55.293: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.650352ms
Aug 26 15:14:11.989: INFO: Number of nodes with available pods: 0
Aug 26 15:14:11.989: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 15:14:11.993: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9206/daemonsets","resourceVersion":"3911980"},"items":null}

Aug 26 15:14:11.996: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9206/pods","resourceVersion":"3911980"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:14:12.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9206" for this suite.

• [SLOW TEST:42.358 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":165,"skipped":2688,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:14:12.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 26 15:14:13.329: INFO: Waiting up to 5m0s for pod "pod-b4cd18de-ca24-468f-8286-80113b61379a" in namespace "emptydir-9583" to be "success or failure"
Aug 26 15:14:13.409: INFO: Pod "pod-b4cd18de-ca24-468f-8286-80113b61379a": Phase="Pending", Reason="", readiness=false. Elapsed: 79.142654ms
Aug 26 15:14:15.499: INFO: Pod "pod-b4cd18de-ca24-468f-8286-80113b61379a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16948586s
Aug 26 15:14:17.540: INFO: Pod "pod-b4cd18de-ca24-468f-8286-80113b61379a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210546833s
Aug 26 15:14:19.545: INFO: Pod "pod-b4cd18de-ca24-468f-8286-80113b61379a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215584893s
STEP: Saw pod success
Aug 26 15:14:19.545: INFO: Pod "pod-b4cd18de-ca24-468f-8286-80113b61379a" satisfied condition "success or failure"
Aug 26 15:14:19.549: INFO: Trying to get logs from node jerma-worker pod pod-b4cd18de-ca24-468f-8286-80113b61379a container test-container: 
STEP: delete the pod
Aug 26 15:14:19.599: INFO: Waiting for pod pod-b4cd18de-ca24-468f-8286-80113b61379a to disappear
Aug 26 15:14:19.619: INFO: Pod pod-b4cd18de-ca24-468f-8286-80113b61379a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:14:19.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9583" for this suite.

• [SLOW TEST:7.466 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:14:19.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:14:19.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-620" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":167,"skipped":2753,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:14:19.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:14:28.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8613" for this suite.

• [SLOW TEST:9.045 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:14:29.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 26 15:14:29.593: INFO: created pod pod-service-account-defaultsa
Aug 26 15:14:29.593: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 26 15:14:29.597: INFO: created pod pod-service-account-mountsa
Aug 26 15:14:29.597: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 26 15:14:29.624: INFO: created pod pod-service-account-nomountsa
Aug 26 15:14:29.624: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 26 15:14:29.655: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 26 15:14:29.655: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 26 15:14:29.720: INFO: created pod pod-service-account-mountsa-mountspec
Aug 26 15:14:29.720: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 26 15:14:29.744: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 26 15:14:29.744: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 26 15:14:29.787: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 26 15:14:29.787: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 26 15:14:29.851: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 26 15:14:29.852: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 26 15:14:29.883: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 26 15:14:29.883: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:14:29.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-834" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":169,"skipped":2807,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:14:29.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:14:44.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3635" for this suite.

• [SLOW TEST:15.319 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":170,"skipped":2815,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:14:45.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 15:14:59.964: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 15:15:00.337: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 15:15:02.337: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 15:15:02.511: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 15:15:04.337: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 15:15:04.610: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 15:15:06.337: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 15:15:06.385: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:15:06.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5445" for this suite.

• [SLOW TEST:21.552 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2863,"failed":0}
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:15:06.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 26 15:15:20.768: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 26 15:15:32.144: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:15:32.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6416" for this suite.

• [SLOW TEST:26.340 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":172,"skipped":2863,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:15:33.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 26 15:15:34.362: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 26 15:15:34.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3785'
Aug 26 15:15:37.937: INFO: stderr: ""
Aug 26 15:15:37.937: INFO: stdout: "service/agnhost-slave created\n"
Aug 26 15:15:37.938: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 26 15:15:37.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3785'
Aug 26 15:15:42.596: INFO: stderr: ""
Aug 26 15:15:42.597: INFO: stdout: "service/agnhost-master created\n"
Aug 26 15:15:42.598: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 26 15:15:42.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3785'
Aug 26 15:15:45.670: INFO: stderr: ""
Aug 26 15:15:45.670: INFO: stdout: "service/frontend created\n"
Aug 26 15:15:45.672: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 26 15:15:45.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3785'
Aug 26 15:15:48.561: INFO: stderr: ""
Aug 26 15:15:48.561: INFO: stdout: "deployment.apps/frontend created\n"
Aug 26 15:15:48.563: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 26 15:15:48.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3785'
Aug 26 15:15:51.485: INFO: stderr: ""
Aug 26 15:15:51.485: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 26 15:15:51.486: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 26 15:15:51.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3785'
Aug 26 15:15:54.966: INFO: stderr: ""
Aug 26 15:15:54.966: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 26 15:15:54.966: INFO: Waiting for all frontend pods to be Running.
Aug 26 15:16:10.020: INFO: Waiting for frontend to serve content.
Aug 26 15:16:10.649: INFO: Trying to add a new entry to the guestbook.
Aug 26 15:16:11.047: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 26 15:16:11.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3785'
Aug 26 15:16:12.993: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 15:16:12.994: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 15:16:12.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3785'
Aug 26 15:16:14.514: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 15:16:14.514: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 15:16:14.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3785'
Aug 26 15:16:16.308: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 15:16:16.309: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 15:16:16.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3785'
Aug 26 15:16:17.692: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 15:16:17.693: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 15:16:17.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3785'
Aug 26 15:16:18.934: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 15:16:18.934: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 15:16:18.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3785'
Aug 26 15:16:21.075: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 15:16:21.075: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:16:21.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3785" for this suite.

• [SLOW TEST:48.419 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":173,"skipped":2895,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:16:21.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:16:24.663: INFO: Waiting up to 5m0s for pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107" in namespace "projected-1862" to be "success or failure"
Aug 26 15:16:25.443: INFO: Pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107": Phase="Pending", Reason="", readiness=false. Elapsed: 779.379672ms
Aug 26 15:16:27.552: INFO: Pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107": Phase="Pending", Reason="", readiness=false. Elapsed: 2.889258971s
Aug 26 15:16:29.609: INFO: Pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107": Phase="Pending", Reason="", readiness=false. Elapsed: 4.945938193s
Aug 26 15:16:32.057: INFO: Pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107": Phase="Pending", Reason="", readiness=false. Elapsed: 7.393763588s
Aug 26 15:16:34.183: INFO: Pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107": Phase="Running", Reason="", readiness=true. Elapsed: 9.520190909s
Aug 26 15:16:36.194: INFO: Pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.53103371s
STEP: Saw pod success
Aug 26 15:16:36.195: INFO: Pod "downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107" satisfied condition "success or failure"
Aug 26 15:16:36.200: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107 container client-container: 
STEP: delete the pod
Aug 26 15:16:36.279: INFO: Waiting for pod downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107 to disappear
Aug 26 15:16:36.288: INFO: Pod downwardapi-volume-070f4cda-a3d7-4522-a8cb-46aac1e62107 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:16:36.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1862" for this suite.

• [SLOW TEST:14.662 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2913,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:16:36.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as an owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0826 15:16:50.095192       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 15:16:50.095: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:16:50.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-370" for this suite.

• [SLOW TEST:14.237 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":175,"skipped":2922,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:16:50.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:16:51.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620" in namespace "downward-api-510" to be "success or failure"
Aug 26 15:16:51.087: INFO: Pod "downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620": Phase="Pending", Reason="", readiness=false. Elapsed: 53.009854ms
Aug 26 15:16:53.114: INFO: Pod "downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079912912s
Aug 26 15:16:55.249: INFO: Pod "downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215106972s
Aug 26 15:16:57.433: INFO: Pod "downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399620474s
Aug 26 15:16:59.739: INFO: Pod "downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.705326991s
STEP: Saw pod success
Aug 26 15:16:59.739: INFO: Pod "downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620" satisfied condition "success or failure"
Aug 26 15:16:59.743: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620 container client-container: 
STEP: delete the pod
Aug 26 15:17:00.465: INFO: Waiting for pod downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620 to disappear
Aug 26 15:17:00.761: INFO: Pod downwardapi-volume-3c8783c9-6a03-40fd-8992-d6670b724620 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:17:00.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-510" for this suite.

• [SLOW TEST:10.504 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2924,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:17:01.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 26 15:17:02.540: INFO: Waiting up to 5m0s for pod "pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14" in namespace "emptydir-4530" to be "success or failure"
Aug 26 15:17:02.660: INFO: Pod "pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14": Phase="Pending", Reason="", readiness=false. Elapsed: 119.815752ms
Aug 26 15:17:05.222: INFO: Pod "pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.681311904s
Aug 26 15:17:07.227: INFO: Pod "pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.686461668s
Aug 26 15:17:09.257: INFO: Pod "pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.716405906s
STEP: Saw pod success
Aug 26 15:17:09.257: INFO: Pod "pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14" satisfied condition "success or failure"
Aug 26 15:17:09.261: INFO: Trying to get logs from node jerma-worker pod pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14 container test-container: 
STEP: delete the pod
Aug 26 15:17:09.877: INFO: Waiting for pod pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14 to disappear
Aug 26 15:17:10.289: INFO: Pod pod-3e86ce4d-12ae-4a9d-a540-9b96d05b8f14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:17:10.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4530" for this suite.

• [SLOW TEST:9.530 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2938,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:17:10.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-98d5ac50-19b2-4116-941d-9b7866cf3f71
STEP: Creating a pod to test consume secrets
Aug 26 15:17:14.676: INFO: Waiting up to 5m0s for pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4" in namespace "secrets-339" to be "success or failure"
Aug 26 15:17:14.753: INFO: Pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 76.475411ms
Aug 26 15:17:17.560: INFO: Pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.883250147s
Aug 26 15:17:19.590: INFO: Pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.913393276s
Aug 26 15:17:21.596: INFO: Pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.919081346s
Aug 26 15:17:23.769: INFO: Pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.09290091s
Aug 26 15:17:25.839: INFO: Pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.162719179s
STEP: Saw pod success
Aug 26 15:17:25.840: INFO: Pod "pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4" satisfied condition "success or failure"
Aug 26 15:17:25.893: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4 container secret-env-test: 
STEP: delete the pod
Aug 26 15:17:25.987: INFO: Waiting for pod pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4 to disappear
Aug 26 15:17:26.017: INFO: Pod pod-secrets-d2fb5b50-2f85-4fff-a30f-35602f72d6d4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:17:26.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-339" for this suite.

• [SLOW TEST:15.457 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2947,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:17:26.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-lnlk
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 15:17:26.738: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lnlk" in namespace "subpath-4895" to be "success or failure"
Aug 26 15:17:26.756: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Pending", Reason="", readiness=false. Elapsed: 17.475069ms
Aug 26 15:17:28.959: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220673258s
Aug 26 15:17:30.965: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227193676s
Aug 26 15:17:33.056: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 6.317987219s
Aug 26 15:17:35.154: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 8.415352782s
Aug 26 15:17:37.159: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 10.420572659s
Aug 26 15:17:39.385: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 12.647321123s
Aug 26 15:17:41.392: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 14.653635742s
Aug 26 15:17:43.596: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 16.857779256s
Aug 26 15:17:45.629: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 18.890864705s
Aug 26 15:17:47.865: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 21.126747502s
Aug 26 15:17:49.872: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 23.133854289s
Aug 26 15:17:51.877: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Running", Reason="", readiness=true. Elapsed: 25.139313028s
Aug 26 15:17:54.001: INFO: Pod "pod-subpath-test-secret-lnlk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.263103169s
STEP: Saw pod success
Aug 26 15:17:54.002: INFO: Pod "pod-subpath-test-secret-lnlk" satisfied condition "success or failure"
Aug 26 15:17:54.008: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-lnlk container test-container-subpath-secret-lnlk: 
STEP: delete the pod
Aug 26 15:17:54.063: INFO: Waiting for pod pod-subpath-test-secret-lnlk to disappear
Aug 26 15:17:54.194: INFO: Pod pod-subpath-test-secret-lnlk no longer exists
STEP: Deleting pod pod-subpath-test-secret-lnlk
Aug 26 15:17:54.194: INFO: Deleting pod "pod-subpath-test-secret-lnlk" in namespace "subpath-4895"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:17:54.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4895" for this suite.

• [SLOW TEST:28.416 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":179,"skipped":2949,"failed":0}
SS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:17:54.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 26 15:17:55.079: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 26 15:18:01.585: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 26 15:18:07.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:18:09.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:18:11.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:18:14.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051881, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:18:17.644: INFO: Waited 947.938811ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:18:28.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5816" for this suite.

• [SLOW TEST:33.673 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":180,"skipped":2951,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:18:28.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-f04ee0cd-8626-4583-af84-5ff665643d85
STEP: Creating a pod to test consume configMaps
Aug 26 15:18:30.540: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb" in namespace "projected-4260" to be "success or failure"
Aug 26 15:18:31.034: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 493.216196ms
Aug 26 15:18:33.040: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499479729s
Aug 26 15:18:35.170: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.629057134s
Aug 26 15:18:37.335: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.79434479s
Aug 26 15:18:39.770: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.22985436s
Aug 26 15:18:41.944: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.403009093s
Aug 26 15:18:44.854: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.313411581s
Aug 26 15:18:47.165: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Running", Reason="", readiness=true. Elapsed: 16.624487169s
Aug 26 15:18:49.170: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.629607641s
STEP: Saw pod success
Aug 26 15:18:49.170: INFO: Pod "pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb" satisfied condition "success or failure"
Aug 26 15:18:49.175: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 15:18:49.272: INFO: Waiting for pod pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb to disappear
Aug 26 15:18:49.312: INFO: Pod pod-projected-configmaps-17552ab9-7e03-44b1-bfaa-daae3b51c8bb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:18:49.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4260" for this suite.

• [SLOW TEST:21.200 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2952,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:18:49.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:19:02.665: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 15:19:04.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051943, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:19:07.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051943, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:19:08.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051943, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734051942, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:19:12.238: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:19:19.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6052" for this suite.
STEP: Destroying namespace "webhook-6052-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:31.722 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":182,"skipped":2958,"failed":0}
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:19:21.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6451
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6451
I0826 15:19:24.951526       7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6451, replica count: 2
I0826 15:19:28.003499       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 15:19:31.004285       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 15:19:34.005212       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 15:19:37.006048       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 15:19:37.006: INFO: Creating new exec pod
Aug 26 15:19:46.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6451 execpodqgdjp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 26 15:19:47.927: INFO: stderr: "I0826 15:19:47.841912    2862 log.go:172] (0x2600690) (0x2600850) Create stream\nI0826 15:19:47.843674    2862 log.go:172] (0x2600690) (0x2600850) Stream added, broadcasting: 1\nI0826 15:19:47.852989    2862 log.go:172] (0x2600690) Reply frame received for 1\nI0826 15:19:47.853550    2862 log.go:172] (0x2600690) (0x2be7f80) Create stream\nI0826 15:19:47.853627    2862 log.go:172] (0x2600690) (0x2be7f80) Stream added, broadcasting: 3\nI0826 15:19:47.855057    2862 log.go:172] (0x2600690) Reply frame received for 3\nI0826 15:19:47.855306    2862 log.go:172] (0x2600690) (0x27c4070) Create stream\nI0826 15:19:47.855374    2862 log.go:172] (0x2600690) (0x27c4070) Stream added, broadcasting: 5\nI0826 15:19:47.856452    2862 log.go:172] (0x2600690) Reply frame received for 5\nI0826 15:19:47.911555    2862 log.go:172] (0x2600690) Data frame received for 5\nI0826 15:19:47.911880    2862 log.go:172] (0x27c4070) (5) Data frame handling\nI0826 15:19:47.912026    2862 log.go:172] (0x2600690) Data frame received for 3\nI0826 15:19:47.912110    2862 log.go:172] (0x2be7f80) (3) Data frame handling\nI0826 15:19:47.912295    2862 log.go:172] (0x27c4070) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0826 15:19:47.912831    2862 log.go:172] (0x2600690) Data frame received for 5\nI0826 15:19:47.912937    2862 log.go:172] (0x27c4070) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0826 15:19:47.913069    2862 log.go:172] (0x2600690) Data frame received for 1\nI0826 15:19:47.913191    2862 log.go:172] (0x2600850) (1) Data frame handling\nI0826 15:19:47.913327    2862 log.go:172] (0x27c4070) (5) Data frame sent\nI0826 15:19:47.913500    2862 log.go:172] (0x2600690) Data frame received for 5\nI0826 15:19:47.913590    2862 log.go:172] (0x27c4070) (5) Data frame handling\nI0826 15:19:47.913776    2862 log.go:172] (0x2600850) (1) Data frame sent\nI0826 15:19:47.914813    2862 log.go:172] (0x2600690) (0x2600850) Stream removed, broadcasting: 1\nI0826 15:19:47.915581    2862 log.go:172] (0x2600690) Go away received\nI0826 15:19:47.917829    2862 log.go:172] (0x2600690) (0x2600850) Stream removed, broadcasting: 1\nI0826 15:19:47.917956    2862 log.go:172] (0x2600690) (0x2be7f80) Stream removed, broadcasting: 3\nI0826 15:19:47.918074    2862 log.go:172] (0x2600690) (0x27c4070) Stream removed, broadcasting: 5\n"
Aug 26 15:19:47.928: INFO: stdout: ""
Aug 26 15:19:47.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6451 execpodqgdjp -- /bin/sh -x -c nc -zv -t -w 2 10.103.123.112 80'
Aug 26 15:19:49.484: INFO: stderr: "I0826 15:19:49.380234    2883 log.go:172] (0x28ce000) (0x28ce070) Create stream\nI0826 15:19:49.382824    2883 log.go:172] (0x28ce000) (0x28ce070) Stream added, broadcasting: 1\nI0826 15:19:49.391870    2883 log.go:172] (0x28ce000) Reply frame received for 1\nI0826 15:19:49.392382    2883 log.go:172] (0x28ce000) (0x28e8070) Create stream\nI0826 15:19:49.392462    2883 log.go:172] (0x28ce000) (0x28e8070) Stream added, broadcasting: 3\nI0826 15:19:49.393948    2883 log.go:172] (0x28ce000) Reply frame received for 3\nI0826 15:19:49.394261    2883 log.go:172] (0x28ce000) (0x28e8230) Create stream\nI0826 15:19:49.394374    2883 log.go:172] (0x28ce000) (0x28e8230) Stream added, broadcasting: 5\nI0826 15:19:49.395711    2883 log.go:172] (0x28ce000) Reply frame received for 5\nI0826 15:19:49.465649    2883 log.go:172] (0x28ce000) Data frame received for 3\nI0826 15:19:49.466108    2883 log.go:172] (0x28ce000) Data frame received for 1\nI0826 15:19:49.466246    2883 log.go:172] (0x28ce070) (1) Data frame handling\nI0826 15:19:49.466462    2883 log.go:172] (0x28ce000) Data frame received for 5\nI0826 15:19:49.466603    2883 log.go:172] (0x28e8230) (5) Data frame handling\nI0826 15:19:49.466706    2883 log.go:172] (0x28e8070) (3) Data frame handling\nI0826 15:19:49.467477    2883 log.go:172] (0x28ce070) (1) Data frame sent\nI0826 15:19:49.468173    2883 log.go:172] (0x28e8230) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.123.112 80\nConnection to 10.103.123.112 80 port [tcp/http] succeeded!\nI0826 15:19:49.468271    2883 log.go:172] (0x28ce000) Data frame received for 5\nI0826 15:19:49.468393    2883 log.go:172] (0x28e8230) (5) Data frame handling\nI0826 15:19:49.470043    2883 log.go:172] (0x28ce000) (0x28ce070) Stream removed, broadcasting: 1\nI0826 15:19:49.470300    2883 log.go:172] (0x28ce000) Go away received\nI0826 15:19:49.473701    2883 log.go:172] (0x28ce000) (0x28ce070) Stream removed, broadcasting: 1\nI0826 15:19:49.473897    2883 log.go:172] (0x28ce000) (0x28e8070) Stream removed, broadcasting: 3\nI0826 15:19:49.474071    2883 log.go:172] (0x28ce000) (0x28e8230) Stream removed, broadcasting: 5\n"
Aug 26 15:19:49.485: INFO: stdout: ""
Aug 26 15:19:49.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6451 execpodqgdjp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31065'
Aug 26 15:19:50.868: INFO: stderr: "I0826 15:19:50.769894    2906 log.go:172] (0x2c32000) (0x2c32070) Create stream\nI0826 15:19:50.772443    2906 log.go:172] (0x2c32000) (0x2c32070) Stream added, broadcasting: 1\nI0826 15:19:50.789218    2906 log.go:172] (0x2c32000) Reply frame received for 1\nI0826 15:19:50.790012    2906 log.go:172] (0x2c32000) (0x25e2310) Create stream\nI0826 15:19:50.790180    2906 log.go:172] (0x2c32000) (0x25e2310) Stream added, broadcasting: 3\nI0826 15:19:50.792398    2906 log.go:172] (0x2c32000) Reply frame received for 3\nI0826 15:19:50.792671    2906 log.go:172] (0x2c32000) (0x2c32310) Create stream\nI0826 15:19:50.792802    2906 log.go:172] (0x2c32000) (0x2c32310) Stream added, broadcasting: 5\nI0826 15:19:50.794437    2906 log.go:172] (0x2c32000) Reply frame received for 5\nI0826 15:19:50.850953    2906 log.go:172] (0x2c32000) Data frame received for 5\nI0826 15:19:50.851245    2906 log.go:172] (0x2c32310) (5) Data frame handling\nI0826 15:19:50.851555    2906 log.go:172] (0x2c32000) Data frame received for 3\nI0826 15:19:50.851710    2906 log.go:172] (0x25e2310) (3) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 31065\nConnection to 172.18.0.6 31065 port [tcp/31065] succeeded!\nI0826 15:19:50.851963    2906 log.go:172] (0x2c32310) (5) Data frame sent\nI0826 15:19:50.853390    2906 log.go:172] (0x2c32000) Data frame received for 5\nI0826 15:19:50.853497    2906 log.go:172] (0x2c32310) (5) Data frame handling\nI0826 15:19:50.854833    2906 log.go:172] (0x2c32000) Data frame received for 1\nI0826 15:19:50.854896    2906 log.go:172] (0x2c32070) (1) Data frame handling\nI0826 15:19:50.854953    2906 log.go:172] (0x2c32070) (1) Data frame sent\nI0826 15:19:50.855374    2906 log.go:172] (0x2c32000) (0x2c32070) Stream removed, broadcasting: 1\nI0826 15:19:50.856894    2906 log.go:172] (0x2c32000) Go away received\nI0826 15:19:50.858960    2906 log.go:172] (0x2c32000) (0x2c32070) Stream removed, broadcasting: 1\nI0826 15:19:50.859224    2906 log.go:172] (0x2c32000) (0x25e2310) Stream removed, broadcasting: 3\nI0826 15:19:50.859587    2906 log.go:172] (0x2c32000) (0x2c32310) Stream removed, broadcasting: 5\n"
Aug 26 15:19:50.869: INFO: stdout: ""
Aug 26 15:19:50.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6451 execpodqgdjp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31065'
Aug 26 15:19:52.253: INFO: stderr: "I0826 15:19:52.136245    2930 log.go:172] (0x295a150) (0x295a1c0) Create stream\nI0826 15:19:52.138843    2930 log.go:172] (0x295a150) (0x295a1c0) Stream added, broadcasting: 1\nI0826 15:19:52.163063    2930 log.go:172] (0x295a150) Reply frame received for 1\nI0826 15:19:52.163556    2930 log.go:172] (0x295a150) (0x295a230) Create stream\nI0826 15:19:52.163619    2930 log.go:172] (0x295a150) (0x295a230) Stream added, broadcasting: 3\nI0826 15:19:52.165008    2930 log.go:172] (0x295a150) Reply frame received for 3\nI0826 15:19:52.165213    2930 log.go:172] (0x295a150) (0x2c58070) Create stream\nI0826 15:19:52.165273    2930 log.go:172] (0x295a150) (0x2c58070) Stream added, broadcasting: 5\nI0826 15:19:52.166182    2930 log.go:172] (0x295a150) Reply frame received for 5\nI0826 15:19:52.231046    2930 log.go:172] (0x295a150) Data frame received for 5\nI0826 15:19:52.231310    2930 log.go:172] (0x295a150) Data frame received for 3\nI0826 15:19:52.231467    2930 log.go:172] (0x295a230) (3) Data frame handling\nI0826 15:19:52.231575    2930 log.go:172] (0x2c58070) (5) Data frame handling\nI0826 15:19:52.231999    2930 log.go:172] (0x295a150) Data frame received for 1\nI0826 15:19:52.232143    2930 log.go:172] (0x295a1c0) (1) Data frame handling\nI0826 15:19:52.233200    2930 log.go:172] (0x295a1c0) (1) Data frame sent\nI0826 15:19:52.233317    2930 log.go:172] (0x2c58070) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 31065\nConnection to 172.18.0.3 31065 port [tcp/31065] succeeded!\nI0826 15:19:52.234784    2930 log.go:172] (0x295a150) Data frame received for 5\nI0826 15:19:52.234899    2930 log.go:172] (0x2c58070) (5) Data frame handling\nI0826 15:19:52.236401    2930 log.go:172] (0x295a150) (0x295a1c0) Stream removed, broadcasting: 1\nI0826 15:19:52.236984    2930 log.go:172] (0x295a150) Go away received\nI0826 15:19:52.239421    2930 log.go:172] (0x295a150) (0x295a1c0) Stream removed, broadcasting: 1\nI0826 15:19:52.239881    2930 log.go:172] (0x295a150) (0x295a230) Stream removed, broadcasting: 3\nI0826 15:19:52.240052    2930 log.go:172] (0x295a150) (0x2c58070) Stream removed, broadcasting: 5\n"
Aug 26 15:19:52.254: INFO: stdout: ""
Aug 26 15:19:52.255: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:19:52.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6451" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:31.470 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":183,"skipped":2958,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:19:52.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 15:19:52.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6242'
Aug 26 15:19:53.783: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 15:19:53.784: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 26 15:19:53.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6242'
Aug 26 15:19:55.019: INFO: stderr: ""
Aug 26 15:19:55.019: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:19:55.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6242" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":184,"skipped":2960,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:19:55.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 26 15:19:55.093: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:20:10.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2479" for this suite.

• [SLOW TEST:15.841 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":185,"skipped":2971,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:20:10.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 26 15:20:11.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3805'
Aug 26 15:20:14.277: INFO: stderr: ""
Aug 26 15:20:14.277: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 15:20:16.078: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:16.078: INFO: Found 0 / 1
Aug 26 15:20:16.488: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:16.488: INFO: Found 0 / 1
Aug 26 15:20:17.741: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:17.741: INFO: Found 0 / 1
Aug 26 15:20:18.915: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:18.915: INFO: Found 0 / 1
Aug 26 15:20:19.399: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:19.399: INFO: Found 0 / 1
Aug 26 15:20:20.447: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:20.447: INFO: Found 0 / 1
Aug 26 15:20:21.466: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:21.466: INFO: Found 0 / 1
Aug 26 15:20:22.284: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:22.285: INFO: Found 1 / 1
Aug 26 15:20:22.285: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 26 15:20:22.290: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:22.290: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Aug 26 15:20:22.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-hm8kk --namespace=kubectl-3805 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 26 15:20:23.583: INFO: stderr: ""
Aug 26 15:20:23.584: INFO: stdout: "pod/agnhost-master-hm8kk patched\n"
STEP: checking annotations
Aug 26 15:20:23.692: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:20:23.692: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:20:23.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3805" for this suite.

• [SLOW TEST:12.828 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":186,"skipped":2981,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:20:23.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi-version CRD
Aug 26 15:20:24.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:22:10.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6317" for this suite.

• [SLOW TEST:106.720 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":187,"skipped":3036,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:22:10.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 15:22:11.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5714'
Aug 26 15:22:16.824: INFO: stderr: ""
Aug 26 15:22:16.824: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 26 15:22:21.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5714 -o json'
Aug 26 15:22:23.052: INFO: stderr: ""
Aug 26 15:22:23.052: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-26T15:22:16Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-5714\",\n        \"resourceVersion\": \"3914268\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-5714/pods/e2e-test-httpd-pod\",\n        \"uid\": \"6650d768-51d0-4586-9275-42d868e6072e\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-6gwz5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-6gwz5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-6gwz5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T15:22:16Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T15:22:21Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T15:22:21Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T15:22:16Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://1bff188000859fbf3c7ebe75fe581a4c946d500f84201832d76c291e4ceeb0a1\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-26T15:22:20Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.156\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.156\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-26T15:22:16Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 26 15:22:23.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5714'
Aug 26 15:22:26.012: INFO: stderr: ""
Aug 26 15:22:26.012: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 26 15:22:26.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5714'
Aug 26 15:22:41.602: INFO: stderr: ""
Aug 26 15:22:41.603: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:22:41.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5714" for this suite.

• [SLOW TEST:31.191 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":188,"skipped":3046,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:22:41.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 15:22:41.723: INFO: Waiting up to 5m0s for pod "pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74" in namespace "emptydir-4782" to be "success or failure"
Aug 26 15:22:41.771: INFO: Pod "pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74": Phase="Pending", Reason="", readiness=false. Elapsed: 46.911594ms
Aug 26 15:22:43.776: INFO: Pod "pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051805071s
Aug 26 15:22:45.781: INFO: Pod "pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057407528s
STEP: Saw pod success
Aug 26 15:22:45.781: INFO: Pod "pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74" satisfied condition "success or failure"
Aug 26 15:22:45.788: INFO: Trying to get logs from node jerma-worker pod pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74 container test-container: 
STEP: delete the pod
Aug 26 15:22:45.831: INFO: Waiting for pod pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74 to disappear
Aug 26 15:22:45.837: INFO: Pod pod-6ec684bd-c9e9-4780-b798-408a4f8a5b74 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:22:45.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4782" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3062,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:22:45.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:22:46.047: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/

[the same two-entry listing repeated for each of the 20 proxied /logs/ requests; the capture is truncated here, dropping the remainder of this test and the header of the [k8s.io] [sig-node] PreStop test that follows]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-4475
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4475
STEP: Deleting pre-stop pod
Aug 26 15:22:59.376: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:22:59.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4475" for this suite.

• [SLOW TEST:13.314 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":191,"skipped":3124,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:22:59.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 15:23:07.958: INFO: DNS probes using dns-3288/dns-test-ce4a98aa-a0a9-4a0e-ba06-a194244f4f72 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:23:07.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3288" for this suite.

• [SLOW TEST:8.669 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":192,"skipped":3148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:23:08.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5196
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-5196
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-5196
Aug 26 15:23:08.940: INFO: Found 0 stateful pods, waiting for 1
Aug 26 15:23:18.999: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 26 15:23:19.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:23:20.593: INFO: stderr: "I0826 15:23:20.444232    3149 log.go:172] (0x25ff570) (0x25ffb90) Create stream\nI0826 15:23:20.446426    3149 log.go:172] (0x25ff570) (0x25ffb90) Stream added, broadcasting: 1\nI0826 15:23:20.458026    3149 log.go:172] (0x25ff570) Reply frame received for 1\nI0826 15:23:20.458531    3149 log.go:172] (0x25ff570) (0x2ca4070) Create stream\nI0826 15:23:20.458625    3149 log.go:172] (0x25ff570) (0x2ca4070) Stream added, broadcasting: 3\nI0826 15:23:20.460067    3149 log.go:172] (0x25ff570) Reply frame received for 3\nI0826 15:23:20.460381    3149 log.go:172] (0x25ff570) (0x2b5e0e0) Create stream\nI0826 15:23:20.460454    3149 log.go:172] (0x25ff570) (0x2b5e0e0) Stream added, broadcasting: 5\nI0826 15:23:20.461776    3149 log.go:172] (0x25ff570) Reply frame received for 5\nI0826 15:23:20.509523    3149 log.go:172] (0x25ff570) Data frame received for 5\nI0826 15:23:20.509752    3149 log.go:172] (0x2b5e0e0) (5) Data frame handling\nI0826 15:23:20.510143    3149 log.go:172] (0x2b5e0e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:23:20.574534    3149 log.go:172] (0x25ff570) Data frame received for 3\nI0826 15:23:20.574698    3149 log.go:172] (0x25ff570) Data frame received for 5\nI0826 15:23:20.574877    3149 log.go:172] (0x2b5e0e0) (5) Data frame handling\nI0826 15:23:20.575146    3149 log.go:172] (0x2ca4070) (3) Data frame handling\nI0826 15:23:20.575450    3149 log.go:172] (0x2ca4070) (3) Data frame sent\nI0826 15:23:20.575604    3149 log.go:172] (0x25ff570) Data frame received for 3\nI0826 15:23:20.575733    3149 log.go:172] (0x2ca4070) (3) Data frame handling\nI0826 15:23:20.575967    3149 log.go:172] (0x25ff570) Data frame received for 1\nI0826 15:23:20.576051    3149 log.go:172] (0x25ffb90) (1) Data frame handling\nI0826 15:23:20.576156    3149 log.go:172] (0x25ffb90) (1) Data frame sent\nI0826 15:23:20.577389    3149 log.go:172] (0x25ff570) (0x25ffb90) Stream removed, broadcasting: 1\nI0826 15:23:20.578603    3149 log.go:172] (0x25ff570) Go away received\nI0826 15:23:20.580351    3149 log.go:172] (0x25ff570) (0x25ffb90) Stream removed, broadcasting: 1\nI0826 15:23:20.580925    3149 log.go:172] (0x25ff570) (0x2ca4070) Stream removed, broadcasting: 3\nI0826 15:23:20.581319    3149 log.go:172] (0x25ff570) (0x2b5e0e0) Stream removed, broadcasting: 5\n"
Aug 26 15:23:20.594: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:23:20.594: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
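How the test drives a pod to Ready=false without killing it: the webserver's readiness check serves index.html, so moving the file aside fails the probe, and moving it back (as happens below) restores readiness. A sketch, assuming the probe is the HTTP check the e2e fixture configures:

kubectl -n statefulset-5196 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
kubectl -n statefulset-5196 get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # soon prints: False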

Aug 26 15:23:20.599: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 15:23:30.658: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 15:23:30.658: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 15:23:30.689: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:23:30.690: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:23:30.691: INFO: ss-1                 Pending         []
Aug 26 15:23:30.691: INFO: 
Aug 26 15:23:30.691: INFO: StatefulSet ss has not reached scale 3, at 2
Aug 26 15:23:31.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98796891s
Aug 26 15:23:32.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.938014068s
Aug 26 15:23:33.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.911816075s
Aug 26 15:23:34.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.730584827s
Aug 26 15:23:36.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.724539533s
Aug 26 15:23:37.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.643106785s
Aug 26 15:23:38.067: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.61732284s
Aug 26 15:23:39.073: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.611568149s
Aug 26 15:23:40.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 605.928677ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5196
Aug 26 15:23:41.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:23:43.423: INFO: stderr: "I0826 15:23:42.766369    3171 log.go:172] (0x261cb60) (0x261ce00) Create stream\nI0826 15:23:42.769647    3171 log.go:172] (0x261cb60) (0x261ce00) Stream added, broadcasting: 1\nI0826 15:23:42.780308    3171 log.go:172] (0x261cb60) Reply frame received for 1\nI0826 15:23:42.781151    3171 log.go:172] (0x261cb60) (0x24b8700) Create stream\nI0826 15:23:42.781256    3171 log.go:172] (0x261cb60) (0x24b8700) Stream added, broadcasting: 3\nI0826 15:23:42.782764    3171 log.go:172] (0x261cb60) Reply frame received for 3\nI0826 15:23:42.782951    3171 log.go:172] (0x261cb60) (0x261d420) Create stream\nI0826 15:23:42.783010    3171 log.go:172] (0x261cb60) (0x261d420) Stream added, broadcasting: 5\nI0826 15:23:42.784294    3171 log.go:172] (0x261cb60) Reply frame received for 5\nI0826 15:23:42.852675    3171 log.go:172] (0x261cb60) Data frame received for 5\nI0826 15:23:42.852864    3171 log.go:172] (0x261d420) (5) Data frame handling\nI0826 15:23:42.853194    3171 log.go:172] (0x261d420) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 15:23:43.407145    3171 log.go:172] (0x261cb60) Data frame received for 5\nI0826 15:23:43.407363    3171 log.go:172] (0x261d420) (5) Data frame handling\nI0826 15:23:43.408473    3171 log.go:172] (0x261cb60) Data frame received for 3\nI0826 15:23:43.408647    3171 log.go:172] (0x24b8700) (3) Data frame handling\nI0826 15:23:43.408945    3171 log.go:172] (0x24b8700) (3) Data frame sent\nI0826 15:23:43.409158    3171 log.go:172] (0x261cb60) Data frame received for 3\nI0826 15:23:43.409322    3171 log.go:172] (0x24b8700) (3) Data frame handling\nI0826 15:23:43.409493    3171 log.go:172] (0x261cb60) Data frame received for 1\nI0826 15:23:43.409660    3171 log.go:172] (0x261ce00) (1) Data frame handling\nI0826 15:23:43.409848    3171 log.go:172] (0x261ce00) (1) Data frame sent\nI0826 15:23:43.411213    3171 log.go:172] (0x261cb60) (0x261ce00) Stream removed, broadcasting: 1\nI0826 15:23:43.413321    3171 log.go:172] (0x261cb60) Go away received\nI0826 15:23:43.415738    3171 log.go:172] (0x261cb60) (0x261ce00) Stream removed, broadcasting: 1\nI0826 15:23:43.415980    3171 log.go:172] (0x261cb60) (0x24b8700) Stream removed, broadcasting: 3\nI0826 15:23:43.416217    3171 log.go:172] (0x261cb60) (0x261d420) Stream removed, broadcasting: 5\n"
Aug 26 15:23:43.423: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 15:23:43.423: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 15:23:43.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:23:44.950: INFO: stderr: "I0826 15:23:44.842204    3191 log.go:172] (0x2705a40) (0x2705c00) Create stream\nI0826 15:23:44.844347    3191 log.go:172] (0x2705a40) (0x2705c00) Stream added, broadcasting: 1\nI0826 15:23:44.853169    3191 log.go:172] (0x2705a40) Reply frame received for 1\nI0826 15:23:44.853707    3191 log.go:172] (0x2705a40) (0x2ccaa80) Create stream\nI0826 15:23:44.853774    3191 log.go:172] (0x2705a40) (0x2ccaa80) Stream added, broadcasting: 3\nI0826 15:23:44.855398    3191 log.go:172] (0x2705a40) Reply frame received for 3\nI0826 15:23:44.855873    3191 log.go:172] (0x2705a40) (0x24b82a0) Create stream\nI0826 15:23:44.855994    3191 log.go:172] (0x2705a40) (0x24b82a0) Stream added, broadcasting: 5\nI0826 15:23:44.857813    3191 log.go:172] (0x2705a40) Reply frame received for 5\nI0826 15:23:44.932708    3191 log.go:172] (0x2705a40) Data frame received for 5\nI0826 15:23:44.932986    3191 log.go:172] (0x2705a40) Data frame received for 3\nI0826 15:23:44.933069    3191 log.go:172] (0x2ccaa80) (3) Data frame handling\nI0826 15:23:44.933254    3191 log.go:172] (0x24b82a0) (5) Data frame handling\nI0826 15:23:44.933496    3191 log.go:172] (0x2705a40) Data frame received for 1\nI0826 15:23:44.933565    3191 log.go:172] (0x2705c00) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 15:23:44.934175    3191 log.go:172] (0x2705c00) (1) Data frame sent\nI0826 15:23:44.934317    3191 log.go:172] (0x24b82a0) (5) Data frame sent\nI0826 15:23:44.934532    3191 log.go:172] (0x2ccaa80) (3) Data frame sent\nI0826 15:23:44.934643    3191 log.go:172] (0x2705a40) Data frame received for 3\nI0826 15:23:44.934699    3191 log.go:172] (0x2ccaa80) (3) Data frame handling\nI0826 15:23:44.934763    3191 log.go:172] (0x2705a40) Data frame received for 5\nI0826 15:23:44.934835    3191 log.go:172] (0x24b82a0) (5) Data frame handling\nI0826 15:23:44.936063    3191 log.go:172] (0x2705a40) (0x2705c00) Stream removed, broadcasting: 1\nI0826 15:23:44.937841    3191 log.go:172] (0x2705a40) Go away received\nI0826 15:23:44.940335    3191 log.go:172] (0x2705a40) (0x2705c00) Stream removed, broadcasting: 1\nI0826 15:23:44.940830    3191 log.go:172] (0x2705a40) (0x2ccaa80) Stream removed, broadcasting: 3\nI0826 15:23:44.941036    3191 log.go:172] (0x2705a40) (0x24b82a0) Stream removed, broadcasting: 5\n"
Aug 26 15:23:44.951: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 15:23:44.951: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 15:23:44.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:23:46.270: INFO: stderr: "I0826 15:23:46.174320    3214 log.go:172] (0x2c44000) (0x2c44070) Create stream\nI0826 15:23:46.180548    3214 log.go:172] (0x2c44000) (0x2c44070) Stream added, broadcasting: 1\nI0826 15:23:46.187867    3214 log.go:172] (0x2c44000) Reply frame received for 1\nI0826 15:23:46.188290    3214 log.go:172] (0x2c44000) (0x2b8e000) Create stream\nI0826 15:23:46.188359    3214 log.go:172] (0x2c44000) (0x2b8e000) Stream added, broadcasting: 3\nI0826 15:23:46.189688    3214 log.go:172] (0x2c44000) Reply frame received for 3\nI0826 15:23:46.189885    3214 log.go:172] (0x2c44000) (0x2c91ab0) Create stream\nI0826 15:23:46.189937    3214 log.go:172] (0x2c44000) (0x2c91ab0) Stream added, broadcasting: 5\nI0826 15:23:46.190835    3214 log.go:172] (0x2c44000) Reply frame received for 5\nI0826 15:23:46.251747    3214 log.go:172] (0x2c44000) Data frame received for 5\nI0826 15:23:46.251955    3214 log.go:172] (0x2c91ab0) (5) Data frame handling\nI0826 15:23:46.252162    3214 log.go:172] (0x2c44000) Data frame received for 3\nI0826 15:23:46.252276    3214 log.go:172] (0x2b8e000) (3) Data frame handling\nI0826 15:23:46.252342    3214 log.go:172] (0x2c44000) Data frame received for 1\nI0826 15:23:46.252416    3214 log.go:172] (0x2c44070) (1) Data frame handling\nI0826 15:23:46.252829    3214 log.go:172] (0x2b8e000) (3) Data frame sent\nI0826 15:23:46.253001    3214 log.go:172] (0x2c44070) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 15:23:46.253388    3214 log.go:172] (0x2c44000) Data frame received for 3\nI0826 15:23:46.253455    3214 log.go:172] (0x2c91ab0) (5) Data frame sent\nI0826 15:23:46.253549    3214 log.go:172] (0x2c44000) Data frame received for 5\nI0826 15:23:46.253610    3214 log.go:172] (0x2c91ab0) (5) Data frame handling\nI0826 15:23:46.253858    3214 log.go:172] (0x2b8e000) (3) Data frame handling\nI0826 15:23:46.254815    3214 log.go:172] (0x2c44000) (0x2c44070) Stream removed, broadcasting: 1\nI0826 15:23:46.256035    3214 log.go:172] (0x2c44000) Go away received\nI0826 15:23:46.260476    3214 log.go:172] (0x2c44000) (0x2c44070) Stream removed, broadcasting: 1\nI0826 15:23:46.260719    3214 log.go:172] (0x2c44000) (0x2b8e000) Stream removed, broadcasting: 3\nI0826 15:23:46.260913    3214 log.go:172] (0x2c44000) (0x2c91ab0) Stream removed, broadcasting: 5\n"
Aug 26 15:23:46.271: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 15:23:46.271: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 15:23:46.275: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 15:23:46.275: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 15:23:46.275: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 26 15:23:46.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:23:47.617: INFO: stderr: "I0826 15:23:47.527159    3237 log.go:172] (0x2b88000) (0x2b88070) Create stream\nI0826 15:23:47.528885    3237 log.go:172] (0x2b88000) (0x2b88070) Stream added, broadcasting: 1\nI0826 15:23:47.536992    3237 log.go:172] (0x2b88000) Reply frame received for 1\nI0826 15:23:47.537399    3237 log.go:172] (0x2b88000) (0x2c4f340) Create stream\nI0826 15:23:47.537452    3237 log.go:172] (0x2b88000) (0x2c4f340) Stream added, broadcasting: 3\nI0826 15:23:47.538418    3237 log.go:172] (0x2b88000) Reply frame received for 3\nI0826 15:23:47.538612    3237 log.go:172] (0x2b88000) (0x25c4540) Create stream\nI0826 15:23:47.538665    3237 log.go:172] (0x2b88000) (0x25c4540) Stream added, broadcasting: 5\nI0826 15:23:47.539616    3237 log.go:172] (0x2b88000) Reply frame received for 5\nI0826 15:23:47.601614    3237 log.go:172] (0x2b88000) Data frame received for 3\nI0826 15:23:47.601867    3237 log.go:172] (0x2b88000) Data frame received for 5\nI0826 15:23:47.602068    3237 log.go:172] (0x25c4540) (5) Data frame handling\nI0826 15:23:47.602348    3237 log.go:172] (0x2c4f340) (3) Data frame handling\nI0826 15:23:47.602455    3237 log.go:172] (0x2b88000) Data frame received for 1\nI0826 15:23:47.602525    3237 log.go:172] (0x2b88070) (1) Data frame handling\nI0826 15:23:47.602923    3237 log.go:172] (0x2b88070) (1) Data frame sent\nI0826 15:23:47.603108    3237 log.go:172] (0x2c4f340) (3) Data frame sent\nI0826 15:23:47.603210    3237 log.go:172] (0x25c4540) (5) Data frame sent\nI0826 15:23:47.603334    3237 log.go:172] (0x2b88000) Data frame received for 5\nI0826 15:23:47.603453    3237 log.go:172] (0x25c4540) (5) Data frame handling\nI0826 15:23:47.603639    3237 log.go:172] (0x2b88000) Data frame received for 3\nI0826 15:23:47.603708    3237 log.go:172] (0x2c4f340) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:23:47.606467    3237 log.go:172] (0x2b88000) (0x2b88070) Stream removed, broadcasting: 1\nI0826 15:23:47.607179    3237 log.go:172] (0x2b88000) Go away received\nI0826 15:23:47.609614    3237 log.go:172] (0x2b88000) (0x2b88070) Stream removed, broadcasting: 1\nI0826 15:23:47.609773    3237 log.go:172] (0x2b88000) (0x2c4f340) Stream removed, broadcasting: 3\nI0826 15:23:47.609908    3237 log.go:172] (0x2b88000) (0x25c4540) Stream removed, broadcasting: 5\n"
Aug 26 15:23:47.617: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:23:47.617: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 15:23:47.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:23:48.993: INFO: stderr: "I0826 15:23:48.862567    3258 log.go:172] (0x2809dc0) (0x2809e30) Create stream\nI0826 15:23:48.868334    3258 log.go:172] (0x2809dc0) (0x2809e30) Stream added, broadcasting: 1\nI0826 15:23:48.885316    3258 log.go:172] (0x2809dc0) Reply frame received for 1\nI0826 15:23:48.886186    3258 log.go:172] (0x2809dc0) (0x24bc8c0) Create stream\nI0826 15:23:48.886284    3258 log.go:172] (0x2809dc0) (0x24bc8c0) Stream added, broadcasting: 3\nI0826 15:23:48.888350    3258 log.go:172] (0x2809dc0) Reply frame received for 3\nI0826 15:23:48.888558    3258 log.go:172] (0x2809dc0) (0x24bd180) Create stream\nI0826 15:23:48.888612    3258 log.go:172] (0x2809dc0) (0x24bd180) Stream added, broadcasting: 5\nI0826 15:23:48.889741    3258 log.go:172] (0x2809dc0) Reply frame received for 5\nI0826 15:23:48.953094    3258 log.go:172] (0x2809dc0) Data frame received for 5\nI0826 15:23:48.953369    3258 log.go:172] (0x24bd180) (5) Data frame handling\nI0826 15:23:48.954023    3258 log.go:172] (0x24bd180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:23:48.979675    3258 log.go:172] (0x2809dc0) Data frame received for 3\nI0826 15:23:48.979824    3258 log.go:172] (0x24bc8c0) (3) Data frame handling\nI0826 15:23:48.979916    3258 log.go:172] (0x24bc8c0) (3) Data frame sent\nI0826 15:23:48.979991    3258 log.go:172] (0x2809dc0) Data frame received for 3\nI0826 15:23:48.980039    3258 log.go:172] (0x24bc8c0) (3) Data frame handling\nI0826 15:23:48.980255    3258 log.go:172] (0x2809dc0) Data frame received for 5\nI0826 15:23:48.980344    3258 log.go:172] (0x24bd180) (5) Data frame handling\nI0826 15:23:48.981527    3258 log.go:172] (0x2809dc0) Data frame received for 1\nI0826 15:23:48.981578    3258 log.go:172] (0x2809e30) (1) Data frame handling\nI0826 15:23:48.981641    3258 log.go:172] (0x2809e30) (1) Data frame sent\nI0826 15:23:48.982378    3258 log.go:172] (0x2809dc0) (0x2809e30) Stream removed, broadcasting: 1\nI0826 15:23:48.985397    3258 log.go:172] (0x2809dc0) (0x2809e30) Stream removed, broadcasting: 1\nI0826 15:23:48.985642    3258 log.go:172] (0x2809dc0) (0x24bc8c0) Stream removed, broadcasting: 3\nI0826 15:23:48.985826    3258 log.go:172] (0x2809dc0) (0x24bd180) Stream removed, broadcasting: 5\n"
Aug 26 15:23:48.994: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:23:48.994: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 15:23:48.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 15:23:51.028: INFO: stderr: "I0826 15:23:50.602399    3280 log.go:172] (0x28ce9a0) (0x28cea10) Create stream\nI0826 15:23:50.605649    3280 log.go:172] (0x28ce9a0) (0x28cea10) Stream added, broadcasting: 1\nI0826 15:23:50.613582    3280 log.go:172] (0x28ce9a0) Reply frame received for 1\nI0826 15:23:50.614062    3280 log.go:172] (0x28ce9a0) (0x274c070) Create stream\nI0826 15:23:50.614120    3280 log.go:172] (0x28ce9a0) (0x274c070) Stream added, broadcasting: 3\nI0826 15:23:50.615379    3280 log.go:172] (0x28ce9a0) Reply frame received for 3\nI0826 15:23:50.615582    3280 log.go:172] (0x28ce9a0) (0x2784070) Create stream\nI0826 15:23:50.615655    3280 log.go:172] (0x28ce9a0) (0x2784070) Stream added, broadcasting: 5\nI0826 15:23:50.616868    3280 log.go:172] (0x28ce9a0) Reply frame received for 5\nI0826 15:23:50.670004    3280 log.go:172] (0x28ce9a0) Data frame received for 5\nI0826 15:23:50.670260    3280 log.go:172] (0x2784070) (5) Data frame handling\nI0826 15:23:50.670741    3280 log.go:172] (0x2784070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 15:23:51.007667    3280 log.go:172] (0x28ce9a0) Data frame received for 3\nI0826 15:23:51.007882    3280 log.go:172] (0x274c070) (3) Data frame handling\nI0826 15:23:51.008037    3280 log.go:172] (0x28ce9a0) Data frame received for 5\nI0826 15:23:51.008265    3280 log.go:172] (0x2784070) (5) Data frame handling\nI0826 15:23:51.008482    3280 log.go:172] (0x28ce9a0) Data frame received for 1\nI0826 15:23:51.008693    3280 log.go:172] (0x28cea10) (1) Data frame handling\nI0826 15:23:51.008934    3280 log.go:172] (0x274c070) (3) Data frame sent\nI0826 15:23:51.009112    3280 log.go:172] (0x28ce9a0) Data frame received for 3\nI0826 15:23:51.009270    3280 log.go:172] (0x274c070) (3) Data frame handling\nI0826 15:23:51.009565    3280 log.go:172] (0x28cea10) (1) Data frame sent\nI0826 15:23:51.014707    3280 log.go:172] (0x28ce9a0) (0x28cea10) Stream removed, broadcasting: 1\nI0826 15:23:51.014965    3280 log.go:172] (0x28ce9a0) Go away received\nI0826 15:23:51.017801    3280 log.go:172] (0x28ce9a0) (0x28cea10) Stream removed, broadcasting: 1\nI0826 15:23:51.017993    3280 log.go:172] (0x28ce9a0) (0x274c070) Stream removed, broadcasting: 3\nI0826 15:23:51.018137    3280 log.go:172] (0x28ce9a0) (0x2784070) Stream removed, broadcasting: 5\n"
Aug 26 15:23:51.028: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 15:23:51.028: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

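The three kubectl exec calls above move index.html out of the Apache document root, so each webserver container's readiness probe (which fetches the index page) starts failing and the pods drop to Ready=false without restarting. A minimal manual equivalent, reusing the exact command from the log (the trailing || true keeps the step from aborting if a pod has already gone away):

for pod in ss-0 ss-1 ss-2; do
  kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 "$pod" -- \
    /bin/sh -x -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
done
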
Aug 26 15:23:51.028: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 15:23:51.049: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 26 15:24:01.062: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 15:24:01.062: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 15:24:01.062: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 15:24:01.087: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:01.087: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:01.088: INFO: ss-1  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:01.088: INFO: ss-2  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:01.089: INFO: 
Aug 26 15:24:01.089: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:02.122: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:02.122: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:02.122: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:02.123: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:02.123: INFO: 
Aug 26 15:24:02.123: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:03.338: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:03.338: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:03.339: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:03.339: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:03.339: INFO: 
Aug 26 15:24:03.339: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:04.562: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:04.563: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:04.563: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:04.563: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:04.564: INFO: 
Aug 26 15:24:04.564: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:05.820: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:05.820: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:05.821: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:05.821: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:05.821: INFO: 
Aug 26 15:24:05.821: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:07.161: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:07.162: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:07.162: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:07.163: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:07.163: INFO: 
Aug 26 15:24:07.163: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:08.341: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:08.342: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:08.342: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:08.343: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:08.343: INFO: 
Aug 26 15:24:08.343: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:09.399: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:09.399: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:09.400: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:09.400: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:09.401: INFO: 
Aug 26 15:24:09.401: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 15:24:10.504: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 15:24:10.504: INFO: ss-0  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:09 +0000 UTC  }]
Aug 26 15:24:10.505: INFO: ss-1  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:10.505: INFO: ss-2  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 15:23:30 +0000 UTC  }]
Aug 26 15:24:10.506: INFO: 
Aug 26 15:24:10.506: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5196
Aug 26 15:24:11.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:24:13.441: INFO: rc: 1
Aug 26 15:24:13.441: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:24:23.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:24:24.586: INFO: rc: 1
Aug 26 15:24:24.586: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:24:34.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:24:35.689: INFO: rc: 1
Aug 26 15:24:35.690: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:24:45.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:24:46.859: INFO: rc: 1
Aug 26 15:24:46.860: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:24:56.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:24:57.994: INFO: rc: 1
Aug 26 15:24:57.995: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:25:07.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:25:09.101: INFO: rc: 1
Aug 26 15:25:09.102: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:25:19.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:25:20.239: INFO: rc: 1
Aug 26 15:25:20.239: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:25:30.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:25:31.391: INFO: rc: 1
Aug 26 15:25:31.391: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:25:41.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:25:42.573: INFO: rc: 1
Aug 26 15:25:42.573: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:25:52.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:25:53.699: INFO: rc: 1
Aug 26 15:25:53.699: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:26:03.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:26:04.860: INFO: rc: 1
Aug 26 15:26:04.860: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:26:14.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:26:16.018: INFO: rc: 1
Aug 26 15:26:16.019: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:26:26.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:26:27.737: INFO: rc: 1
Aug 26 15:26:27.737: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:26:37.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:26:38.889: INFO: rc: 1
Aug 26 15:26:38.890: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:26:48.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:26:50.018: INFO: rc: 1
Aug 26 15:26:50.018: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:27:00.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:27:01.150: INFO: rc: 1
Aug 26 15:27:01.150: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:27:11.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:27:12.241: INFO: rc: 1
Aug 26 15:27:12.241: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:27:22.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:27:23.375: INFO: rc: 1
Aug 26 15:27:23.375: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:27:33.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:27:34.521: INFO: rc: 1
Aug 26 15:27:34.521: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:27:44.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:27:45.668: INFO: rc: 1
Aug 26 15:27:45.668: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:27:55.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:27:56.795: INFO: rc: 1
Aug 26 15:27:56.796: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:28:06.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:28:07.963: INFO: rc: 1
Aug 26 15:28:07.964: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:28:17.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:28:19.118: INFO: rc: 1
Aug 26 15:28:19.118: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:28:29.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:28:30.545: INFO: rc: 1
Aug 26 15:28:30.545: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:28:40.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:28:41.661: INFO: rc: 1
Aug 26 15:28:41.661: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:28:51.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:28:52.853: INFO: rc: 1
Aug 26 15:28:52.853: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:29:02.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:29:04.166: INFO: rc: 1
Aug 26 15:29:04.166: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 26 15:29:14.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5196 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 15:29:15.309: INFO: rc: 1
Aug 26 15:29:15.309: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Aug 26 15:29:15.310: INFO: Scaling statefulset ss to 0
Aug 26 15:29:15.322: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 15:29:15.325: INFO: Deleting all statefulset in ns statefulset-5196
Aug 26 15:29:15.329: INFO: Scaling statefulset ss to 0
Aug 26 15:29:15.340: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 15:29:15.344: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:29:15.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5196" for this suite.

• [SLOW TEST:367.323 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":193,"skipped":3178,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:29:15.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:29:15.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 15:29:34.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4570 create -f -'
Aug 26 15:29:40.969: INFO: stderr: ""
Aug 26 15:29:40.970: INFO: stdout: "e2e-test-crd-publish-openapi-7147-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 26 15:29:40.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4570 delete e2e-test-crd-publish-openapi-7147-crds test-cr'
Aug 26 15:29:42.095: INFO: stderr: ""
Aug 26 15:29:42.095: INFO: stdout: "e2e-test-crd-publish-openapi-7147-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 26 15:29:42.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4570 apply -f -'
Aug 26 15:29:43.659: INFO: stderr: ""
Aug 26 15:29:43.659: INFO: stdout: "e2e-test-crd-publish-openapi-7147-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 26 15:29:43.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4570 delete e2e-test-crd-publish-openapi-7147-crds test-cr'
Aug 26 15:29:44.828: INFO: stderr: ""
Aug 26 15:29:44.828: INFO: stdout: "e2e-test-crd-publish-openapi-7147-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 26 15:29:44.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7147-crds'
Aug 26 15:29:46.268: INFO: stderr: ""
Aug 26 15:29:46.268: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7147-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:29:56.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4570" for this suite.

• [SLOW TEST:40.637 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":194,"skipped":3179,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:29:56.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:29:56.238: INFO: Create a RollingUpdate DaemonSet
Aug 26 15:29:56.243: INFO: Check that daemon pods launch on every node of the cluster
Aug 26 15:29:56.260: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:29:56.286: INFO: Number of nodes with available pods: 0
Aug 26 15:29:56.286: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:29:57.295: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:29:57.301: INFO: Number of nodes with available pods: 0
Aug 26 15:29:57.301: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:29:58.464: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:29:58.471: INFO: Number of nodes with available pods: 0
Aug 26 15:29:58.471: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:29:59.294: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:29:59.300: INFO: Number of nodes with available pods: 0
Aug 26 15:29:59.300: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:30:00.364: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:30:00.541: INFO: Number of nodes with available pods: 0
Aug 26 15:30:00.541: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:30:01.295: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:30:01.300: INFO: Number of nodes with available pods: 1
Aug 26 15:30:01.300: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:30:02.293: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:30:02.297: INFO: Number of nodes with available pods: 2
Aug 26 15:30:02.297: INFO: Number of running nodes: 2, number of available pods: 2
Aug 26 15:30:02.297: INFO: Update the DaemonSet to trigger a rollout
Aug 26 15:30:02.304: INFO: Updating DaemonSet daemon-set
Aug 26 15:30:12.356: INFO: Roll back the DaemonSet before rollout is complete
Aug 26 15:30:12.364: INFO: Updating DaemonSet daemon-set
Aug 26 15:30:12.365: INFO: Make sure DaemonSet rollback is complete
Aug 26 15:30:12.376: INFO: Wrong image for pod: daemon-set-hkh82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 15:30:12.376: INFO: Pod daemon-set-hkh82 is not available
Aug 26 15:30:12.474: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:30:13.482: INFO: Wrong image for pod: daemon-set-hkh82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 15:30:13.482: INFO: Pod daemon-set-hkh82 is not available
Aug 26 15:30:13.488: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:30:14.481: INFO: Wrong image for pod: daemon-set-hkh82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 15:30:14.482: INFO: Pod daemon-set-hkh82 is not available
Aug 26 15:30:14.487: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:30:15.482: INFO: Pod daemon-set-9xvf4 is not available
Aug 26 15:30:15.489: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6095, will wait for the garbage collector to delete the pods
Aug 26 15:30:15.562: INFO: Deleting DaemonSet.extensions daemon-set took: 6.710052ms
Aug 26 15:30:15.962: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.707345ms
Aug 26 15:30:21.935: INFO: Number of nodes with available pods: 0
Aug 26 15:30:21.935: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 15:30:21.947: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6095/daemonsets","resourceVersion":"3915986"},"items":null}

Aug 26 15:30:21.951: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6095/pods","resourceVersion":"3915986"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:30:22.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6095" for this suite.

• [SLOW TEST:27.356 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":195,"skipped":3191,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:30:23.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:30:24.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b" in namespace "downward-api-8248" to be "success or failure"
Aug 26 15:30:24.875: INFO: Pod "downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b": Phase="Pending", Reason="", readiness=false. Elapsed: 279.469116ms
Aug 26 15:30:26.880: INFO: Pod "downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284052059s
Aug 26 15:30:28.989: INFO: Pod "downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393003856s
Aug 26 15:30:30.999: INFO: Pod "downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403111345s
Aug 26 15:30:33.006: INFO: Pod "downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.410239156s
STEP: Saw pod success
Aug 26 15:30:33.006: INFO: Pod "downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b" satisfied condition "success or failure"
Aug 26 15:30:33.011: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b container client-container: 
STEP: delete the pod
Aug 26 15:30:33.044: INFO: Waiting for pod downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b to disappear
Aug 26 15:30:33.048: INFO: Pod downwardapi-volume-58d98df2-41a6-4cf7-89a2-4b2091da894b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:30:33.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8248" for this suite.

• [SLOW TEST:9.637 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3195,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:30:33.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0826 15:31:14.832008       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 15:31:14.832: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:31:14.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3611" for this suite.

• [SLOW TEST:41.782 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":197,"skipped":3209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:31:14.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-5294102b-3e76-41e9-8717-2ae3d340310e
STEP: Creating secret with name s-test-opt-upd-51d83b7c-a291-41e6-80c1-ac45b10a7ddb
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5294102b-3e76-41e9-8717-2ae3d340310e
STEP: Updating secret s-test-opt-upd-51d83b7c-a291-41e6-80c1-ac45b10a7ddb
STEP: Creating secret with name s-test-opt-create-67ca4ee3-10ac-41bc-939d-a5f07d87a863
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:33:04.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9166" for this suite.

• [SLOW TEST:109.396 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3240,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:33:04.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:33:13.325: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 15:33:18.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052792, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:33:20.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052792, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:33:22.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052793, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052792, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:33:25.951: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:33:26.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6729" for this suite.
STEP: Destroying namespace "webhook-6729-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:23.057 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":199,"skipped":3245,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:33:27.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:33:33.549: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 15:33:35.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:33:37.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734052813, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:33:41.094: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:33:41.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4560" for this suite.
STEP: Destroying namespace "webhook-4560-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.244 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":200,"skipped":3247,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:33:41.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 15:33:42.690: INFO: Waiting up to 5m0s for pod "pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56" in namespace "emptydir-499" to be "success or failure"
Aug 26 15:33:42.709: INFO: Pod "pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56": Phase="Pending", Reason="", readiness=false. Elapsed: 19.28613ms
Aug 26 15:33:44.716: INFO: Pod "pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026036212s
Aug 26 15:33:46.944: INFO: Pod "pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253974505s
Aug 26 15:33:48.967: INFO: Pod "pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.276475406s
STEP: Saw pod success
Aug 26 15:33:48.967: INFO: Pod "pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56" satisfied condition "success or failure"
Aug 26 15:33:48.971: INFO: Trying to get logs from node jerma-worker pod pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56 container test-container: 
STEP: delete the pod
Aug 26 15:33:49.017: INFO: Waiting for pod pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56 to disappear
Aug 26 15:33:49.027: INFO: Pod pod-e3871bf1-0dc1-4f30-b3b6-3e18aa2cae56 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:33:49.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-499" for this suite.

• [SLOW TEST:7.489 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3259,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:33:49.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-42da5d35-8f71-4300-8ea1-1828cac1d40c
STEP: Creating a pod to test consume configMaps
Aug 26 15:33:49.372: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252" in namespace "projected-9349" to be "success or failure"
Aug 26 15:33:49.454: INFO: Pod "pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252": Phase="Pending", Reason="", readiness=false. Elapsed: 81.485407ms
Aug 26 15:33:51.478: INFO: Pod "pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105551923s
Aug 26 15:33:53.484: INFO: Pod "pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111864983s
STEP: Saw pod success
Aug 26 15:33:53.484: INFO: Pod "pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252" satisfied condition "success or failure"
Aug 26 15:33:53.495: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 15:33:53.534: INFO: Waiting for pod pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252 to disappear
Aug 26 15:33:53.567: INFO: Pod pod-projected-configmaps-64805cff-93ae-4827-9f46-c8f80c98d252 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:33:53.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9349" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3269,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:33:53.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-6046/secret-test-a4f9c6ed-e04b-440e-85c0-d1649716e663
STEP: Creating a pod to test consume secrets
Aug 26 15:33:53.808: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3" in namespace "secrets-6046" to be "success or failure"
Aug 26 15:33:53.999: INFO: Pod "pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3": Phase="Pending", Reason="", readiness=false. Elapsed: 190.949187ms
Aug 26 15:33:56.011: INFO: Pod "pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203049561s
Aug 26 15:33:58.088: INFO: Pod "pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280093231s
Aug 26 15:34:00.093: INFO: Pod "pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.285698252s
STEP: Saw pod success
Aug 26 15:34:00.094: INFO: Pod "pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3" satisfied condition "success or failure"
Aug 26 15:34:00.097: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3 container env-test: 
STEP: delete the pod
Aug 26 15:34:00.192: INFO: Waiting for pod pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3 to disappear
Aug 26 15:34:00.249: INFO: Pod pod-configmaps-6f08a89e-10b5-4bfd-81fc-a241ba4211b3 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:34:00.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6046" for this suite.

• [SLOW TEST:6.667 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3273,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:34:00.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:34:00.435: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 26 15:34:05.447: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 26 15:34:05.448: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 15:34:22.547: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-872 /apis/apps/v1/namespaces/deployment-872/deployments/test-cleanup-deployment c83d6107-8ac7-4e5f-b72c-bcdd7d0e68c0 3917133 1 2020-08-26 15:34:05 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9ac93a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 15:34:05 +0000 UTC,LastTransitionTime:2020-08-26 15:34:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-08-26 15:34:19 +0000 UTC,LastTransitionTime:2020-08-26 15:34:05 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 26 15:34:22.901: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-872 /apis/apps/v1/namespaces/deployment-872/replicasets/test-cleanup-deployment-55ffc6b7b6 a3101c4c-25f8-4035-a3a9-4b360aeaf2f9 3917119 1 2020-08-26 15:34:05 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c83d6107-8ac7-4e5f-b72c-bcdd7d0e68c0 0x9b68ed7 0x9b68ed8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9b68f48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 26 15:34:23.258: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-5rglt" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-5rglt test-cleanup-deployment-55ffc6b7b6- deployment-872 /api/v1/namespaces/deployment-872/pods/test-cleanup-deployment-55ffc6b7b6-5rglt bc352f76-113a-468a-bffc-1d4d2da252ca 3917118 0 2020-08-26 15:34:05 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 a3101c4c-25f8-4035-a3a9-4b360aeaf2f9 0x9b09e57 0x9b09e58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bf89k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bf89k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bf89k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:34:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:34:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.19,StartTime:2020-08-26 15:34:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:34:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://e361fb02f6e6faff2c98914b0660a386177f910c8c02f8a066ae65ed869bdb16,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:34:23.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-872" for this suite.

• [SLOW TEST:22.997 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":204,"skipped":3281,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:34:23.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 15:34:25.534: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:25.601: INFO: Number of nodes with available pods: 0
Aug 26 15:34:25.602: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:34:26.612: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:26.619: INFO: Number of nodes with available pods: 0
Aug 26 15:34:26.619: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:34:27.661: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:27.869: INFO: Number of nodes with available pods: 0
Aug 26 15:34:27.869: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:34:28.866: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:29.030: INFO: Number of nodes with available pods: 0
Aug 26 15:34:29.031: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:34:29.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:29.977: INFO: Number of nodes with available pods: 1
Aug 26 15:34:29.978: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:34:30.612: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:30.619: INFO: Number of nodes with available pods: 1
Aug 26 15:34:30.619: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 15:34:31.637: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:31.642: INFO: Number of nodes with available pods: 2
Aug 26 15:34:31.642: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 26 15:34:31.670: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:31.674: INFO: Number of nodes with available pods: 1
Aug 26 15:34:31.675: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:32.684: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:32.689: INFO: Number of nodes with available pods: 1
Aug 26 15:34:32.690: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:33.683: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:33.689: INFO: Number of nodes with available pods: 1
Aug 26 15:34:33.689: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:34.684: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:34.690: INFO: Number of nodes with available pods: 1
Aug 26 15:34:34.690: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:35.685: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:35.713: INFO: Number of nodes with available pods: 1
Aug 26 15:34:35.713: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:36.685: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:36.692: INFO: Number of nodes with available pods: 1
Aug 26 15:34:36.692: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:37.739: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:37.820: INFO: Number of nodes with available pods: 1
Aug 26 15:34:37.820: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:38.715: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:38.741: INFO: Number of nodes with available pods: 1
Aug 26 15:34:38.742: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:39.683: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:39.688: INFO: Number of nodes with available pods: 1
Aug 26 15:34:39.688: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:40.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:40.737: INFO: Number of nodes with available pods: 1
Aug 26 15:34:40.738: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:41.780: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:41.923: INFO: Number of nodes with available pods: 1
Aug 26 15:34:41.923: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:42.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:42.692: INFO: Number of nodes with available pods: 1
Aug 26 15:34:42.692: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:43.684: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:43.688: INFO: Number of nodes with available pods: 1
Aug 26 15:34:43.689: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:44.682: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:44.687: INFO: Number of nodes with available pods: 1
Aug 26 15:34:44.687: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 15:34:45.684: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 15:34:45.690: INFO: Number of nodes with available pods: 2
Aug 26 15:34:45.690: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8604, will wait for the garbage collector to delete the pods
Aug 26 15:34:45.755: INFO: Deleting DaemonSet.extensions daemon-set took: 7.564866ms
Aug 26 15:34:45.856: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.929552ms
Aug 26 15:34:49.963: INFO: Number of nodes with available pods: 0
Aug 26 15:34:49.963: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 15:34:49.967: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8604/daemonsets","resourceVersion":"3917296"},"items":null}

Aug 26 15:34:49.970: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8604/pods","resourceVersion":"3917296"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:34:50.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8604" for this suite.

• [SLOW TEST:26.752 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":205,"skipped":3293,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:34:50.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-f0475c28-5bdf-4755-a262-0ef159198d8b
STEP: Creating a pod to test consume secrets
Aug 26 15:34:50.098: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10" in namespace "projected-2156" to be "success or failure"
Aug 26 15:34:50.149: INFO: Pod "pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10": Phase="Pending", Reason="", readiness=false. Elapsed: 50.720628ms
Aug 26 15:34:52.342: INFO: Pod "pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243493263s
Aug 26 15:34:54.607: INFO: Pod "pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.508700767s
STEP: Saw pod success
Aug 26 15:34:54.607: INFO: Pod "pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10" satisfied condition "success or failure"
Aug 26 15:34:55.089: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 15:34:56.181: INFO: Waiting for pod pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10 to disappear
Aug 26 15:34:56.375: INFO: Pod pod-projected-secrets-02423e15-650e-498d-ac5a-9ee47bc3ce10 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:34:56.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2156" for this suite.

• [SLOW TEST:6.361 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3305,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:34:56.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-86fcace7-dccd-4f5f-9872-e7cbe2d592af in namespace container-probe-7871
Aug 26 15:35:04.664: INFO: Started pod test-webserver-86fcace7-dccd-4f5f-9872-e7cbe2d592af in namespace container-probe-7871
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 15:35:05.330: INFO: Initial restart count of pod test-webserver-86fcace7-dccd-4f5f-9872-e7cbe2d592af is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:39:06.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7871" for this suite.

• [SLOW TEST:250.311 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3318,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:39:06.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9ed93222-ee95-41c0-9033-cb34aa7b4ffd
STEP: Creating configMap with name cm-test-opt-upd-65ee478a-5924-4ad6-8661-2b24890b321b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9ed93222-ee95-41c0-9033-cb34aa7b4ffd
STEP: Updating configmap cm-test-opt-upd-65ee478a-5924-4ad6-8661-2b24890b321b
STEP: Creating configMap with name cm-test-opt-create-11079057-2327-418f-93f8-c590f8b063cf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:40:52.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6616" for this suite.

• [SLOW TEST:105.989 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3322,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:40:52.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:40:52.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f" in namespace "projected-3753" to be "success or failure"
Aug 26 15:40:52.821: INFO: Pod "downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.597247ms
Aug 26 15:40:54.916: INFO: Pod "downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117274839s
Aug 26 15:40:56.924: INFO: Pod "downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124648011s
Aug 26 15:40:59.350: INFO: Pod "downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.551329918s
STEP: Saw pod success
Aug 26 15:40:59.351: INFO: Pod "downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f" satisfied condition "success or failure"
Aug 26 15:40:59.446: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f container client-container: 
STEP: delete the pod
Aug 26 15:41:00.212: INFO: Waiting for pod downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f to disappear
Aug 26 15:41:00.716: INFO: Pod downwardapi-volume-47e6acf3-8f4a-4773-a3f3-cc6f6475029f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:41:00.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3753" for this suite.

• [SLOW TEST:8.108 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3336,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:41:00.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:41:01.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 26 15:41:20.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 create -f -'
Aug 26 15:41:26.449: INFO: stderr: ""
Aug 26 15:41:26.449: INFO: stdout: "e2e-test-crd-publish-openapi-9437-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 26 15:41:26.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 delete e2e-test-crd-publish-openapi-9437-crds test-foo'
Aug 26 15:41:27.644: INFO: stderr: ""
Aug 26 15:41:27.644: INFO: stdout: "e2e-test-crd-publish-openapi-9437-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 26 15:41:27.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 apply -f -'
Aug 26 15:41:31.248: INFO: stderr: ""
Aug 26 15:41:31.249: INFO: stdout: "e2e-test-crd-publish-openapi-9437-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 26 15:41:31.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 delete e2e-test-crd-publish-openapi-9437-crds test-foo'
Aug 26 15:41:32.394: INFO: stderr: ""
Aug 26 15:41:32.394: INFO: stdout: "e2e-test-crd-publish-openapi-9437-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 26 15:41:32.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 create -f -'
Aug 26 15:41:33.891: INFO: rc: 1
Aug 26 15:41:33.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 apply -f -'
Aug 26 15:41:35.583: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 26 15:41:35.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 create -f -'
Aug 26 15:41:37.260: INFO: rc: 1
Aug 26 15:41:37.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 apply -f -'
Aug 26 15:41:39.865: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 26 15:41:39.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9437-crds'
Aug 26 15:41:42.384: INFO: stderr: ""
Aug 26 15:41:42.384: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9437-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 26 15:41:42.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9437-crds.metadata'
Aug 26 15:41:44.053: INFO: stderr: ""
Aug 26 15:41:44.053: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9437-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 26 15:41:44.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9437-crds.spec'
Aug 26 15:41:45.790: INFO: stderr: ""
Aug 26 15:41:45.791: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9437-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 26 15:41:45.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9437-crds.spec.bars'
Aug 26 15:41:47.303: INFO: stderr: ""
Aug 26 15:41:47.303: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9437-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 26 15:41:47.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9437-crds.spec.bars2'
Aug 26 15:41:49.359: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:42:08.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9811" for this suite.

• [SLOW TEST:67.492 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":210,"skipped":3366,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:42:08.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-046a59ab-2b07-4671-96c4-75cb0766ae31 in namespace container-probe-3418
Aug 26 15:42:17.763: INFO: Started pod liveness-046a59ab-2b07-4671-96c4-75cb0766ae31 in namespace container-probe-3418
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 15:42:17.769: INFO: Initial restart count of pod liveness-046a59ab-2b07-4671-96c4-75cb0766ae31 is 0
Aug 26 15:42:39.060: INFO: Restart count of pod container-probe-3418/liveness-046a59ab-2b07-4671-96c4-75cb0766ae31 is now 1 (21.290200446s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:42:39.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3418" for this suite.

• [SLOW TEST:31.177 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3382,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:42:39.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:42:39.939: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 26 15:42:40.071: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 26 15:42:45.079: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 26 15:42:45.080: INFO: Creating deployment "test-rolling-update-deployment"
Aug 26 15:42:45.086: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 26 15:42:45.536: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set
Aug 26 15:42:47.694: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Aug 26 15:42:48.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053365, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053365, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053366, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053365, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:42:50.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053365, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053365, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053366, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053365, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:42:52.055: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 15:42:52.072: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-9941 /apis/apps/v1/namespaces/deployment-9941/deployments/test-rolling-update-deployment 5f959130-ed4d-4438-8a52-e1e86587f486 3918848 1 2020-08-26 15:42:45 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x9751d28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 15:42:45 +0000 UTC,LastTransitionTime:2020-08-26 15:42:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-26 15:42:51 +0000 UTC,LastTransitionTime:2020-08-26 15:42:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 26 15:42:52.079: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-9941 /apis/apps/v1/namespaces/deployment-9941/replicasets/test-rolling-update-deployment-67cf4f6444 e61df542-615c-4fdc-8057-774ba29f012d 3918837 1 2020-08-26 15:42:45 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 5f959130-ed4d-4438-8a52-e1e86587f486 0x97b8737 0x97b8738}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x97b87a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 26 15:42:52.079: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 26 15:42:52.080: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-9941 /apis/apps/v1/namespaces/deployment-9941/replicasets/test-rolling-update-controller 3449118d-5d3a-478b-a057-a98b387ae26e 3918846 2 2020-08-26 15:42:39 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 5f959130-ed4d-4438-8a52-e1e86587f486 0x97b8667 0x97b8668}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x97b86c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 15:42:52.087: INFO: Pod "test-rolling-update-deployment-67cf4f6444-nlmhk" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-nlmhk test-rolling-update-deployment-67cf4f6444- deployment-9941 /api/v1/namespaces/deployment-9941/pods/test-rolling-update-deployment-67cf4f6444-nlmhk dcf0b8af-e7ad-46d9-a2eb-d65c8cd0bdd9 3918836 0 2020-08-26 15:42:45 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 e61df542-615c-4fdc-8057-774ba29f012d 0x97b8c37 0x97b8c38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7pxkc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7pxkc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7pxkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:42:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:42:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:42:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:42:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.180,StartTime:2020-08-26 15:42:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 15:42:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://606681b10f0bf7bdbb68e38953602e882265600b7d2c462f2a1d9b46ff87ff72,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:42:52.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9941" for this suite.

• [SLOW TEST:12.622 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":212,"skipped":3407,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:42:52.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-ef214b90-fa95-4931-b3f5-147a1610411e
STEP: Creating a pod to test consume secrets
Aug 26 15:42:52.536: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202" in namespace "projected-1357" to be "success or failure"
Aug 26 15:42:52.560: INFO: Pod "pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202": Phase="Pending", Reason="", readiness=false. Elapsed: 23.465284ms
Aug 26 15:42:54.641: INFO: Pod "pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105265683s
Aug 26 15:42:56.647: INFO: Pod "pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111242965s
STEP: Saw pod success
Aug 26 15:42:56.648: INFO: Pod "pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202" satisfied condition "success or failure"
Aug 26 15:42:56.652: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 15:42:56.689: INFO: Waiting for pod pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202 to disappear
Aug 26 15:42:56.693: INFO: Pod pod-projected-secrets-b83145b5-1703-4342-aba6-ca9e9757c202 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:42:56.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1357" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3409,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:42:56.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 26 15:43:02.911: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 26 15:43:04.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:43:06.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053382, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 15:43:10.288: INFO: Waiting for number of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:43:10.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:43:11.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4086" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:14.944 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":214,"skipped":3409,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:43:11.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0826 15:43:21.969097       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 15:43:21.969: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:43:21.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8490" for this suite.

• [SLOW TEST:10.327 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":215,"skipped":3419,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:43:21.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:43:22.731: INFO: Creating deployment "test-recreate-deployment"
Aug 26 15:43:22.756: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 26 15:43:22.978: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 26 15:43:24.991: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 26 15:43:24.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053402, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053402, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053403, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053402, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:43:27.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053402, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053402, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053403, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734053402, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 15:43:29.001: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 26 15:43:29.011: INFO: Updating deployment test-recreate-deployment
Aug 26 15:43:29.012: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 15:43:33.584: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-7805 /apis/apps/v1/namespaces/deployment-7805/deployments/test-recreate-deployment 76c6b7cb-f70c-4014-ab7a-fd128df4f62a 3919158 2 2020-08-26 15:43:22 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x992eb98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-26 15:43:32 +0000 UTC,LastTransitionTime:2020-08-26 15:43:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-26 15:43:33 +0000 UTC,LastTransitionTime:2020-08-26 15:43:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 26 15:43:33.651: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-7805 /apis/apps/v1/namespaces/deployment-7805/replicasets/test-recreate-deployment-5f94c574ff 636b70d8-c129-4b20-8418-4931910a361c 3919154 1 2020-08-26 15:43:31 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 76c6b7cb-f70c-4014-ab7a-fd128df4f62a 0x990be07 0x990be08}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x990be98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 15:43:33.652: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 26 15:43:33.652: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-7805 /apis/apps/v1/namespaces/deployment-7805/replicasets/test-recreate-deployment-799c574856 aff7227e-1a3b-4aa3-98b8-0ced12ac817b 3919143 2 2020-08-26 15:43:22 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 76c6b7cb-f70c-4014-ab7a-fd128df4f62a 0x990bf57 0x990bf58}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x990bff8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 15:43:33.795: INFO: Pod "test-recreate-deployment-5f94c574ff-42clg" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-42clg test-recreate-deployment-5f94c574ff- deployment-7805 /api/v1/namespaces/deployment-7805/pods/test-recreate-deployment-5f94c574ff-42clg a13edf38-80ee-4078-8994-4dd184c889aa 3919159 0 2020-08-26 15:43:32 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 636b70d8-c129-4b20-8418-4931910a361c 0x992f137 0x992f138}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dmbnp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dmbnp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dmbnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:43:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:43:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:43:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 15:43:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 15:43:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:43:33.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7805" for this suite.

• [SLOW TEST:11.822 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":216,"skipped":3468,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:43:33.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2586.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2586.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2586.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2586.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2586.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2586.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 15:43:56.567: INFO: DNS probes using dns-2586/dns-test-0f60573f-46c2-44bf-8aa0-d0dc25a1bc1b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:43:57.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2586" for this suite.

• [SLOW TEST:24.730 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":217,"skipped":3478,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:43:58.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 26 15:44:11.518: INFO: Successfully updated pod "pod-update-c13e57fc-17de-40a9-8615-508e3e0f8c65"
STEP: verifying the updated pod is in kubernetes
Aug 26 15:44:11.706: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:44:11.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1023" for this suite.

• [SLOW TEST:13.176 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3489,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:44:11.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:44:13.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 26 15:44:15.228: INFO: stderr: ""
Aug 26 15:44:15.228: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:44:15.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4512" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":219,"skipped":3508,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:44:15.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 15:44:27.282: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:44:27.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6079" for this suite.

• [SLOW TEST:16.742 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3533,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:44:32.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-z7bt
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 15:44:34.952: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-z7bt" in namespace "subpath-1915" to be "success or failure"
Aug 26 15:44:35.208: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Pending", Reason="", readiness=false. Elapsed: 255.265093ms
Aug 26 15:44:37.284: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332030452s
Aug 26 15:44:40.066: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Pending", Reason="", readiness=false. Elapsed: 5.113523023s
Aug 26 15:44:42.072: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Pending", Reason="", readiness=false. Elapsed: 7.119188955s
Aug 26 15:44:44.136: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183359493s
Aug 26 15:44:46.837: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 11.885031561s
Aug 26 15:44:48.844: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 13.891145808s
Aug 26 15:44:50.850: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 15.897909301s
Aug 26 15:44:53.250: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 18.297798721s
Aug 26 15:44:55.267: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 20.314575028s
Aug 26 15:44:57.274: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 22.321447326s
Aug 26 15:44:59.280: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 24.327155341s
Aug 26 15:45:01.285: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 26.332716738s
Aug 26 15:45:03.579: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Running", Reason="", readiness=true. Elapsed: 28.62621493s
Aug 26 15:45:06.071: INFO: Pod "pod-subpath-test-configmap-z7bt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.118364579s
STEP: Saw pod success
Aug 26 15:45:06.071: INFO: Pod "pod-subpath-test-configmap-z7bt" satisfied condition "success or failure"
Aug 26 15:45:06.358: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-z7bt container test-container-subpath-configmap-z7bt: 
STEP: delete the pod
Aug 26 15:45:07.042: INFO: Waiting for pod pod-subpath-test-configmap-z7bt to disappear
Aug 26 15:45:07.304: INFO: Pod pod-subpath-test-configmap-z7bt no longer exists
STEP: Deleting pod pod-subpath-test-configmap-z7bt
Aug 26 15:45:07.304: INFO: Deleting pod "pod-subpath-test-configmap-z7bt" in namespace "subpath-1915"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:45:07.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1915" for this suite.

• [SLOW TEST:35.145 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":221,"skipped":3534,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:45:07.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 26 15:45:08.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5098'
Aug 26 15:45:11.243: INFO: stderr: ""
Aug 26 15:45:11.243: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 15:45:11.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5098'
Aug 26 15:45:12.581: INFO: stderr: ""
Aug 26 15:45:12.582: INFO: stdout: "update-demo-nautilus-586v2 update-demo-nautilus-zc22f "
Aug 26 15:45:12.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-586v2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:45:14.443: INFO: stderr: ""
Aug 26 15:45:14.443: INFO: stdout: ""
Aug 26 15:45:14.443: INFO: update-demo-nautilus-586v2 is created but not running
Aug 26 15:45:19.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5098'
Aug 26 15:45:20.570: INFO: stderr: ""
Aug 26 15:45:20.570: INFO: stdout: "update-demo-nautilus-586v2 update-demo-nautilus-zc22f "
Aug 26 15:45:20.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-586v2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:45:21.766: INFO: stderr: ""
Aug 26 15:45:21.766: INFO: stdout: "true"
Aug 26 15:45:21.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-586v2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:45:22.926: INFO: stderr: ""
Aug 26 15:45:22.927: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 15:45:22.927: INFO: validating pod update-demo-nautilus-586v2
Aug 26 15:45:22.933: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 15:45:22.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 15:45:22.934: INFO: update-demo-nautilus-586v2 is verified up and running
Aug 26 15:45:22.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc22f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:45:24.149: INFO: stderr: ""
Aug 26 15:45:24.150: INFO: stdout: "true"
Aug 26 15:45:24.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc22f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:45:25.299: INFO: stderr: ""
Aug 26 15:45:25.300: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 15:45:25.300: INFO: validating pod update-demo-nautilus-zc22f
Aug 26 15:45:25.304: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 15:45:25.304: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 15:45:25.305: INFO: update-demo-nautilus-zc22f is verified up and running
STEP: rolling-update to new replication controller
Aug 26 15:45:25.312: INFO: scanned /root for discovery docs: 
Aug 26 15:45:25.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5098'
Aug 26 15:45:59.830: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 26 15:45:59.830: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 15:45:59.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5098'
Aug 26 15:46:01.006: INFO: stderr: ""
Aug 26 15:46:01.006: INFO: stdout: "update-demo-kitten-psnx9 update-demo-kitten-zbb6z "
Aug 26 15:46:01.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-psnx9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:46:02.156: INFO: stderr: ""
Aug 26 15:46:02.156: INFO: stdout: "true"
Aug 26 15:46:02.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-psnx9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:46:03.305: INFO: stderr: ""
Aug 26 15:46:03.305: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 26 15:46:03.305: INFO: validating pod update-demo-kitten-psnx9
Aug 26 15:46:03.311: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 26 15:46:03.311: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 26 15:46:03.311: INFO: update-demo-kitten-psnx9 is verified up and running
Aug 26 15:46:03.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zbb6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:46:04.478: INFO: stderr: ""
Aug 26 15:46:04.478: INFO: stdout: "true"
Aug 26 15:46:04.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zbb6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5098'
Aug 26 15:46:05.616: INFO: stderr: ""
Aug 26 15:46:05.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 26 15:46:05.616: INFO: validating pod update-demo-kitten-zbb6z
Aug 26 15:46:05.655: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 26 15:46:05.655: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 26 15:46:05.655: INFO: update-demo-kitten-zbb6z is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:46:05.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5098" for this suite.

• [SLOW TEST:57.904 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":222,"skipped":3549,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:46:05.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 15:46:06.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8778'
Aug 26 15:46:08.067: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 15:46:08.067: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Aug 26 15:46:08.392: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-ctcdf]
Aug 26 15:46:08.393: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-ctcdf" in namespace "kubectl-8778" to be "running and ready"
Aug 26 15:46:08.748: INFO: Pod "e2e-test-httpd-rc-ctcdf": Phase="Pending", Reason="", readiness=false. Elapsed: 354.567843ms
Aug 26 15:46:11.150: INFO: Pod "e2e-test-httpd-rc-ctcdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.756343605s
Aug 26 15:46:13.317: INFO: Pod "e2e-test-httpd-rc-ctcdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.923981145s
Aug 26 15:46:15.629: INFO: Pod "e2e-test-httpd-rc-ctcdf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.235701061s
Aug 26 15:46:17.739: INFO: Pod "e2e-test-httpd-rc-ctcdf": Phase="Running", Reason="", readiness=true. Elapsed: 9.346124564s
Aug 26 15:46:17.740: INFO: Pod "e2e-test-httpd-rc-ctcdf" satisfied condition "running and ready"
Aug 26 15:46:17.740: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-ctcdf]
Aug 26 15:46:17.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8778'
Aug 26 15:46:21.759: INFO: stderr: ""
Aug 26 15:46:21.759: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.27. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.27. Set the 'ServerName' directive globally to suppress this message\n[Wed Aug 26 15:46:15.274493 2020] [mpm_event:notice] [pid 1:tid 140238441630568] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Aug 26 15:46:15.274578 2020] [core:notice] [pid 1:tid 140238441630568] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Aug 26 15:46:21.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8778'
Aug 26 15:46:23.192: INFO: stderr: ""
Aug 26 15:46:23.192: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:46:23.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8778" for this suite.

• [SLOW TEST:19.250 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":223,"skipped":3567,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:46:24.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Aug 26 15:46:29.241: INFO: Waiting up to 5m0s for pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000" in namespace "var-expansion-8734" to be "success or failure"
Aug 26 15:46:30.063: INFO: Pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000": Phase="Pending", Reason="", readiness=false. Elapsed: 821.884708ms
Aug 26 15:46:32.103: INFO: Pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000": Phase="Pending", Reason="", readiness=false. Elapsed: 2.861864963s
Aug 26 15:46:34.508: INFO: Pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000": Phase="Pending", Reason="", readiness=false. Elapsed: 5.266770204s
Aug 26 15:46:36.624: INFO: Pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000": Phase="Pending", Reason="", readiness=false. Elapsed: 7.382561304s
Aug 26 15:46:38.858: INFO: Pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000": Phase="Pending", Reason="", readiness=false. Elapsed: 9.616289441s
Aug 26 15:46:40.863: INFO: Pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.621471931s
STEP: Saw pod success
Aug 26 15:46:40.863: INFO: Pod "var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000" satisfied condition "success or failure"
Aug 26 15:46:40.867: INFO: Trying to get logs from node jerma-worker pod var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000 container dapi-container: 
STEP: delete the pod
Aug 26 15:46:41.013: INFO: Waiting for pod var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000 to disappear
Aug 26 15:46:41.043: INFO: Pod var-expansion-6303f4ad-2bee-47e2-8970-e8f631ddb000 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:46:41.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8734" for this suite.

• [SLOW TEST:16.134 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3569,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:46:41.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:46:41.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c" in namespace "projected-3492" to be "success or failure"
Aug 26 15:46:42.157: INFO: Pod "downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c": Phase="Pending", Reason="", readiness=false. Elapsed: 726.281935ms
Aug 26 15:46:44.377: INFO: Pod "downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946844973s
Aug 26 15:46:46.564: INFO: Pod "downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.134235691s
Aug 26 15:46:49.121: INFO: Pod "downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c": Phase="Running", Reason="", readiness=true. Elapsed: 7.691180886s
Aug 26 15:46:51.461: INFO: Pod "downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.031124597s
STEP: Saw pod success
Aug 26 15:46:51.462: INFO: Pod "downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c" satisfied condition "success or failure"
Aug 26 15:46:51.794: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c container client-container: 
STEP: delete the pod
Aug 26 15:46:53.146: INFO: Waiting for pod downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c to disappear
Aug 26 15:46:53.514: INFO: Pod downwardapi-volume-a2fc622d-679a-45b6-91d8-f755cc24c58c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:46:53.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3492" for this suite.

• [SLOW TEST:12.965 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3571,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:46:54.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Aug 26 15:47:08.794: INFO: Pod pod-hostip-a02a87c3-775f-44e7-8dee-a467a3c46d40 has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:47:08.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9785" for this suite.

• [SLOW TEST:15.946 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3632,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:47:09.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-e39e9984-6882-478e-8179-3a3245ad3601
STEP: Creating a pod to test consume configMaps
Aug 26 15:47:11.058: INFO: Waiting up to 5m0s for pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d" in namespace "configmap-4658" to be "success or failure"
Aug 26 15:47:11.815: INFO: Pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d": Phase="Pending", Reason="", readiness=false. Elapsed: 756.397714ms
Aug 26 15:47:13.821: INFO: Pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.762207043s
Aug 26 15:47:16.324: INFO: Pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.265558173s
Aug 26 15:47:18.775: INFO: Pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.716991307s
Aug 26 15:47:20.846: INFO: Pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.787488127s
Aug 26 15:47:23.216: INFO: Pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.157497215s
STEP: Saw pod success
Aug 26 15:47:23.216: INFO: Pod "pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d" satisfied condition "success or failure"
Aug 26 15:47:23.277: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d container configmap-volume-test: 
STEP: delete the pod
Aug 26 15:47:24.742: INFO: Waiting for pod pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d to disappear
Aug 26 15:47:25.026: INFO: Pod pod-configmaps-31e721ad-4475-490c-8019-cf7ef896330d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:47:25.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4658" for this suite.

• [SLOW TEST:15.521 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3634,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:47:25.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-47ec5a12-ead2-4c49-9795-8391ba19fcf5
STEP: Creating a pod to test consume secrets
Aug 26 15:47:28.463: INFO: Waiting up to 5m0s for pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931" in namespace "secrets-3438" to be "success or failure"
Aug 26 15:47:29.139: INFO: Pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931": Phase="Pending", Reason="", readiness=false. Elapsed: 675.4728ms
Aug 26 15:47:31.564: INFO: Pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931": Phase="Pending", Reason="", readiness=false. Elapsed: 3.100557762s
Aug 26 15:47:33.598: INFO: Pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931": Phase="Pending", Reason="", readiness=false. Elapsed: 5.135373608s
Aug 26 15:47:35.700: INFO: Pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931": Phase="Pending", Reason="", readiness=false. Elapsed: 7.237287388s
Aug 26 15:47:37.959: INFO: Pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931": Phase="Pending", Reason="", readiness=false. Elapsed: 9.496156979s
Aug 26 15:47:40.043: INFO: Pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.579992197s
STEP: Saw pod success
Aug 26 15:47:40.043: INFO: Pod "pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931" satisfied condition "success or failure"
Aug 26 15:47:40.136: INFO: Trying to get logs from node jerma-worker pod pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931 container secret-volume-test: 
STEP: delete the pod
Aug 26 15:47:40.831: INFO: Waiting for pod pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931 to disappear
Aug 26 15:47:40.919: INFO: Pod pod-secrets-824a0c77-c762-483f-b9f1-7ad5dee65931 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:47:40.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3438" for this suite.

• [SLOW TEST:15.432 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3689,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:47:40.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:47:42.092: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4c2079c9-144b-4e24-b30d-e09b1a7bc8d8" in namespace "security-context-test-909" to be "success or failure"
Aug 26 15:47:42.491: INFO: Pod "busybox-user-65534-4c2079c9-144b-4e24-b30d-e09b1a7bc8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 398.816338ms
Aug 26 15:47:44.533: INFO: Pod "busybox-user-65534-4c2079c9-144b-4e24-b30d-e09b1a7bc8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440331267s
Aug 26 15:47:46.545: INFO: Pod "busybox-user-65534-4c2079c9-144b-4e24-b30d-e09b1a7bc8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452888935s
Aug 26 15:47:48.704: INFO: Pod "busybox-user-65534-4c2079c9-144b-4e24-b30d-e09b1a7bc8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.611755571s
Aug 26 15:47:50.868: INFO: Pod "busybox-user-65534-4c2079c9-144b-4e24-b30d-e09b1a7bc8d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.775155503s
Aug 26 15:47:50.868: INFO: Pod "busybox-user-65534-4c2079c9-144b-4e24-b30d-e09b1a7bc8d8" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:47:50.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-909" for this suite.

• [SLOW TEST:10.267 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3693,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:47:51.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:47:51.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 15:48:01.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-926 create -f -'
Aug 26 15:48:06.457: INFO: stderr: ""
Aug 26 15:48:06.457: INFO: stdout: "e2e-test-crd-publish-openapi-4662-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 26 15:48:06.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-926 delete e2e-test-crd-publish-openapi-4662-crds test-cr'
Aug 26 15:48:07.565: INFO: stderr: ""
Aug 26 15:48:07.566: INFO: stdout: "e2e-test-crd-publish-openapi-4662-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 26 15:48:07.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-926 apply -f -'
Aug 26 15:48:09.023: INFO: stderr: ""
Aug 26 15:48:09.023: INFO: stdout: "e2e-test-crd-publish-openapi-4662-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 26 15:48:09.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-926 delete e2e-test-crd-publish-openapi-4662-crds test-cr'
Aug 26 15:48:10.125: INFO: stderr: ""
Aug 26 15:48:10.125: INFO: stdout: "e2e-test-crd-publish-openapi-4662-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 26 15:48:10.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4662-crds'
Aug 26 15:48:11.591: INFO: stderr: ""
Aug 26 15:48:11.591: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4662-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:48:30.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-926" for this suite.

• [SLOW TEST:39.122 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":230,"skipped":3699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:48:30.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 15:48:30.593: INFO: Waiting up to 5m0s for pod "pod-926fcffd-23cf-47ff-be48-452562b4c0a4" in namespace "emptydir-5257" to be "success or failure"
Aug 26 15:48:30.602: INFO: Pod "pod-926fcffd-23cf-47ff-be48-452562b4c0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.561758ms
Aug 26 15:48:32.607: INFO: Pod "pod-926fcffd-23cf-47ff-be48-452562b4c0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014501332s
Aug 26 15:48:34.611: INFO: Pod "pod-926fcffd-23cf-47ff-be48-452562b4c0a4": Phase="Running", Reason="", readiness=true. Elapsed: 4.018673722s
Aug 26 15:48:36.616: INFO: Pod "pod-926fcffd-23cf-47ff-be48-452562b4c0a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023198976s
STEP: Saw pod success
Aug 26 15:48:36.616: INFO: Pod "pod-926fcffd-23cf-47ff-be48-452562b4c0a4" satisfied condition "success or failure"
Aug 26 15:48:36.619: INFO: Trying to get logs from node jerma-worker pod pod-926fcffd-23cf-47ff-be48-452562b4c0a4 container test-container: 
STEP: delete the pod
Aug 26 15:48:36.779: INFO: Waiting for pod pod-926fcffd-23cf-47ff-be48-452562b4c0a4 to disappear
Aug 26 15:48:36.818: INFO: Pod pod-926fcffd-23cf-47ff-be48-452562b4c0a4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:48:36.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5257" for this suite.

• [SLOW TEST:6.503 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3721,"failed":0}
SSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:48:36.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 15:48:37.394: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006" in namespace "security-context-test-489" to be "success or failure"
Aug 26 15:48:37.593: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006": Phase="Pending", Reason="", readiness=false. Elapsed: 198.968108ms
Aug 26 15:48:40.575: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.180988258s
Aug 26 15:48:42.875: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.480535004s
Aug 26 15:48:45.038: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.643118288s
Aug 26 15:48:47.672: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.277124528s
Aug 26 15:48:49.679: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006": Phase="Running", Reason="", readiness=true. Elapsed: 12.284644639s
Aug 26 15:48:51.698: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.303436949s
Aug 26 15:48:51.698: INFO: Pod "busybox-readonly-false-a86edce6-db6d-40a5-a81a-f764772a1006" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:48:51.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-489" for this suite.

• [SLOW TEST:15.621 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3727,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:48:52.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 15:49:04.015: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:49:05.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2993" for this suite.

• [SLOW TEST:12.924 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3782,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:49:05.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Aug 26 15:49:06.857: INFO: Waiting up to 5m0s for pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0" in namespace "var-expansion-8256" to be "success or failure"
Aug 26 15:49:06.860: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.428811ms
Aug 26 15:49:09.104: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246856993s
Aug 26 15:49:12.151: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.293839273s
Aug 26 15:49:14.378: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.521246139s
Aug 26 15:49:16.659: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.802279702s
Aug 26 15:49:18.664: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0": Phase="Running", Reason="", readiness=true. Elapsed: 11.807162931s
Aug 26 15:49:21.215: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.358192301s
STEP: Saw pod success
Aug 26 15:49:21.215: INFO: Pod "var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0" satisfied condition "success or failure"
Aug 26 15:49:21.583: INFO: Trying to get logs from node jerma-worker pod var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0 container dapi-container: 
STEP: delete the pod
Aug 26 15:49:21.985: INFO: Waiting for pod var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0 to disappear
Aug 26 15:49:22.078: INFO: Pod var-expansion-5380e383-ab7d-42a1-8894-d38d797addc0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:49:22.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8256" for this suite.

• [SLOW TEST:16.867 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3789,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:49:22.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 26 15:49:24.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 26 15:49:27.138: INFO: stderr: ""
Aug 26 15:49:27.138: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:49:27.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6395" for this suite.

• [SLOW TEST:5.560 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl api-versions
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:786
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":235,"skipped":3803,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:49:27.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:49:29.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b" in namespace "projected-2407" to be "success or failure"
Aug 26 15:49:30.241: INFO: Pod "downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b": Phase="Pending", Reason="", readiness=false. Elapsed: 724.133933ms
Aug 26 15:49:32.324: INFO: Pod "downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.807549998s
Aug 26 15:49:34.751: INFO: Pod "downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.234729421s
Aug 26 15:49:36.996: INFO: Pod "downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b": Phase="Running", Reason="", readiness=true. Elapsed: 7.479207513s
Aug 26 15:49:39.000: INFO: Pod "downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.483600391s
STEP: Saw pod success
Aug 26 15:49:39.000: INFO: Pod "downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b" satisfied condition "success or failure"
Aug 26 15:49:39.004: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b container client-container: 
STEP: delete the pod
Aug 26 15:49:39.463: INFO: Waiting for pod downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b to disappear
Aug 26 15:49:39.641: INFO: Pod downwardapi-volume-ddb4ad47-adb8-41a2-8673-e9c26497578b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:49:39.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2407" for this suite.

• [SLOW TEST:11.878 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3817,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:49:39.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 15:49:40.474: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 15:49:41.314: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 15:49:41.319: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 15:49:41.331: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 15:49:41.331: INFO: 	Container app ready: true, restart count 0
Aug 26 15:49:41.331: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:49:41.331: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 15:49:41.331: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:49:41.331: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:49:41.331: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 15:49:41.354: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:49:41.354: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 15:49:41.355: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 26 15:49:41.355: INFO: 	Container httpd ready: true, restart count 0
Aug 26 15:49:41.355: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:49:41.355: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:49:41.355: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 15:49:41.355: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly deleting the pod here to free the resources it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d400a645-1f06-4db1-8bb7-383c5fd796c4 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d400a645-1f06-4db1-8bb7-383c5fd796c4 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d400a645-1f06-4db1-8bb7-383c5fd796c4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:49:56.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6392" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.580 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":237,"skipped":3818,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:49:56.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 26 15:50:04.299: INFO: 6 pods remaining
Aug 26 15:50:04.300: INFO: 0 pods have nil DeletionTimestamp
Aug 26 15:50:04.300: INFO: 
Aug 26 15:50:05.753: INFO: 0 pods remaining
Aug 26 15:50:05.753: INFO: 0 pods have nil DeletionTimestamp
Aug 26 15:50:05.753: INFO: 
STEP: Gathering metrics
W0826 15:50:08.432130       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 15:50:08.432: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:50:08.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6064" for this suite.

• [SLOW TEST:13.564 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":238,"skipped":3825,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:50:09.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 26 15:50:12.795: INFO: Created pod &Pod{ObjectMeta:{dns-6245  dns-6245 /api/v1/namespaces/dns-6245/pods/dns-6245 2499780e-b505-40e3-8dc5-6e27fb98dae4 3920849 0 2020-08-26 15:50:12 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgvr2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgvr2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgvr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 26 15:50:19.446: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6245 PodName:dns-6245 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 15:50:19.446: INFO: >>> kubeConfig: /root/.kube/config
I0826 15:50:20.009719       7 log.go:172] (0xa08d6c0) (0xa08d730) Create stream
I0826 15:50:20.009877       7 log.go:172] (0xa08d6c0) (0xa08d730) Stream added, broadcasting: 1
I0826 15:50:20.012810       7 log.go:172] (0xa08d6c0) Reply frame received for 1
I0826 15:50:20.012943       7 log.go:172] (0xa08d6c0) (0xa08d8f0) Create stream
I0826 15:50:20.013006       7 log.go:172] (0xa08d6c0) (0xa08d8f0) Stream added, broadcasting: 3
I0826 15:50:20.014140       7 log.go:172] (0xa08d6c0) Reply frame received for 3
I0826 15:50:20.014247       7 log.go:172] (0xa08d6c0) (0xaf0e070) Create stream
I0826 15:50:20.014306       7 log.go:172] (0xa08d6c0) (0xaf0e070) Stream added, broadcasting: 5
I0826 15:50:20.015593       7 log.go:172] (0xa08d6c0) Reply frame received for 5
I0826 15:50:20.099586       7 log.go:172] (0xa08d6c0) Data frame received for 3
I0826 15:50:20.099697       7 log.go:172] (0xa08d8f0) (3) Data frame handling
I0826 15:50:20.099780       7 log.go:172] (0xa08d8f0) (3) Data frame sent
I0826 15:50:20.101916       7 log.go:172] (0xa08d6c0) Data frame received for 5
I0826 15:50:20.102037       7 log.go:172] (0xaf0e070) (5) Data frame handling
I0826 15:50:20.102168       7 log.go:172] (0xa08d6c0) Data frame received for 3
I0826 15:50:20.102331       7 log.go:172] (0xa08d8f0) (3) Data frame handling
I0826 15:50:20.103388       7 log.go:172] (0xa08d6c0) Data frame received for 1
I0826 15:50:20.103478       7 log.go:172] (0xa08d730) (1) Data frame handling
I0826 15:50:20.103565       7 log.go:172] (0xa08d730) (1) Data frame sent
I0826 15:50:20.103655       7 log.go:172] (0xa08d6c0) (0xa08d730) Stream removed, broadcasting: 1
I0826 15:50:20.103757       7 log.go:172] (0xa08d6c0) Go away received
I0826 15:50:20.104177       7 log.go:172] (0xa08d6c0) (0xa08d730) Stream removed, broadcasting: 1
I0826 15:50:20.104348       7 log.go:172] (0xa08d6c0) (0xa08d8f0) Stream removed, broadcasting: 3
I0826 15:50:20.104440       7 log.go:172] (0xa08d6c0) (0xaf0e070) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 26 15:50:20.105: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6245 PodName:dns-6245 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 15:50:20.105: INFO: >>> kubeConfig: /root/.kube/config
I0826 15:50:20.211451       7 log.go:172] (0xb848bd0) (0xb848c40) Create stream
I0826 15:50:20.211577       7 log.go:172] (0xb848bd0) (0xb848c40) Stream added, broadcasting: 1
I0826 15:50:20.214501       7 log.go:172] (0xb848bd0) Reply frame received for 1
I0826 15:50:20.214669       7 log.go:172] (0xb848bd0) (0xb848e00) Create stream
I0826 15:50:20.214739       7 log.go:172] (0xb848bd0) (0xb848e00) Stream added, broadcasting: 3
I0826 15:50:20.215894       7 log.go:172] (0xb848bd0) Reply frame received for 3
I0826 15:50:20.216006       7 log.go:172] (0xb848bd0) (0x9803ab0) Create stream
I0826 15:50:20.216076       7 log.go:172] (0xb848bd0) (0x9803ab0) Stream added, broadcasting: 5
I0826 15:50:20.217516       7 log.go:172] (0xb848bd0) Reply frame received for 5
I0826 15:50:20.274933       7 log.go:172] (0xb848bd0) Data frame received for 3
I0826 15:50:20.275054       7 log.go:172] (0xb848e00) (3) Data frame handling
I0826 15:50:20.275171       7 log.go:172] (0xb848e00) (3) Data frame sent
I0826 15:50:20.277142       7 log.go:172] (0xb848bd0) Data frame received for 3
I0826 15:50:20.277246       7 log.go:172] (0xb848e00) (3) Data frame handling
I0826 15:50:20.277351       7 log.go:172] (0xb848bd0) Data frame received for 5
I0826 15:50:20.277460       7 log.go:172] (0x9803ab0) (5) Data frame handling
I0826 15:50:20.278112       7 log.go:172] (0xb848bd0) Data frame received for 1
I0826 15:50:20.278206       7 log.go:172] (0xb848c40) (1) Data frame handling
I0826 15:50:20.278326       7 log.go:172] (0xb848c40) (1) Data frame sent
I0826 15:50:20.278419       7 log.go:172] (0xb848bd0) (0xb848c40) Stream removed, broadcasting: 1
I0826 15:50:20.278521       7 log.go:172] (0xb848bd0) Go away received
I0826 15:50:20.278761       7 log.go:172] (0xb848bd0) (0xb848c40) Stream removed, broadcasting: 1
I0826 15:50:20.278844       7 log.go:172] (0xb848bd0) (0xb848e00) Stream removed, broadcasting: 3
I0826 15:50:20.278892       7 log.go:172] (0xb848bd0) (0x9803ab0) Stream removed, broadcasting: 5
Aug 26 15:50:20.279: INFO: Deleting pod dns-6245...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:50:20.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6245" for this suite.

• [SLOW TEST:10.791 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":239,"skipped":3855,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:50:20.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 15:50:20.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2660'
Aug 26 15:50:22.230: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 15:50:22.230: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 26 15:50:25.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2660'
Aug 26 15:50:27.290: INFO: stderr: ""
Aug 26 15:50:27.291: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:50:27.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2660" for this suite.

• [SLOW TEST:7.453 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1484
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":240,"skipped":3863,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:50:28.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 26 15:50:29.700: INFO: namespace kubectl-1408
Aug 26 15:50:29.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1408'
Aug 26 15:50:31.822: INFO: stderr: ""
Aug 26 15:50:31.822: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 15:50:32.857: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:50:32.857: INFO: Found 0 / 1
Aug 26 15:50:33.864: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:50:33.864: INFO: Found 0 / 1
Aug 26 15:50:34.840: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:50:34.840: INFO: Found 0 / 1
Aug 26 15:50:35.828: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:50:35.828: INFO: Found 0 / 1
Aug 26 15:50:36.827: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:50:36.828: INFO: Found 1 / 1
Aug 26 15:50:36.828: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 26 15:50:36.832: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 15:50:36.832: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Aug 26 15:50:36.832: INFO: Waiting on agnhost-master startup in kubectl-1408
Aug 26 15:50:36.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-t9jc4 agnhost-master --namespace=kubectl-1408'
Aug 26 15:50:37.976: INFO: stderr: ""
Aug 26 15:50:37.976: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 26 15:50:37.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1408'
Aug 26 15:50:40.174: INFO: stderr: ""
Aug 26 15:50:40.174: INFO: stdout: "service/rm2 exposed\n"
Aug 26 15:50:40.240: INFO: Service rm2 in namespace kubectl-1408 found.
STEP: exposing service
Aug 26 15:50:42.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1408'
Aug 26 15:50:43.861: INFO: stderr: ""
Aug 26 15:50:43.861: INFO: stdout: "service/rm3 exposed\n"
Aug 26 15:50:43.894: INFO: Service rm3 in namespace kubectl-1408 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:50:45.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1408" for this suite.

• [SLOW TEST:17.831 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":241,"skipped":3870,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:50:45.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 15:50:47.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37" in namespace "projected-6877" to be "success or failure"
Aug 26 15:50:47.589: INFO: Pod "downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37": Phase="Pending", Reason="", readiness=false. Elapsed: 471.19883ms
Aug 26 15:50:49.804: INFO: Pod "downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.686398709s
Aug 26 15:50:52.008: INFO: Pod "downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.890649727s
Aug 26 15:50:54.686: INFO: Pod "downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37": Phase="Pending", Reason="", readiness=false. Elapsed: 7.568302156s
Aug 26 15:50:56.703: INFO: Pod "downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.584913237s
STEP: Saw pod success
Aug 26 15:50:56.703: INFO: Pod "downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37" satisfied condition "success or failure"
Aug 26 15:50:56.707: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37 container client-container: 
STEP: delete the pod
Aug 26 15:50:56.769: INFO: Waiting for pod downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37 to disappear
Aug 26 15:50:56.773: INFO: Pod downwardapi-volume-e01ac175-1008-415a-a86c-08eb7fa23c37 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:50:56.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6877" for this suite.

• [SLOW TEST:10.868 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3872,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:50:56.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-424 to expose endpoints map[]
Aug 26 15:50:56.981: INFO: successfully validated that service endpoint-test2 in namespace services-424 exposes endpoints map[] (9.7349ms elapsed)
STEP: Creating pod pod1 in namespace services-424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-424 to expose endpoints map[pod1:[80]]
Aug 26 15:51:01.260: INFO: successfully validated that service endpoint-test2 in namespace services-424 exposes endpoints map[pod1:[80]] (4.272488921s elapsed)
STEP: Creating pod pod2 in namespace services-424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-424 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 26 15:51:07.049: INFO: Unexpected endpoints: found map[ff881497-9a47-48cd-bf28-14fbe7b4a8d4:[80]], expected map[pod1:[80] pod2:[80]] (5.783473595s elapsed, will retry)
Aug 26 15:51:08.095: INFO: successfully validated that service endpoint-test2 in namespace services-424 exposes endpoints map[pod1:[80] pod2:[80]] (6.829824997s elapsed)
STEP: Deleting pod pod1 in namespace services-424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-424 to expose endpoints map[pod2:[80]]
Aug 26 15:51:08.460: INFO: successfully validated that service endpoint-test2 in namespace services-424 exposes endpoints map[pod2:[80]] (358.185338ms elapsed)
STEP: Deleting pod pod2 in namespace services-424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-424 to expose endpoints map[]
Aug 26 15:51:08.488: INFO: successfully validated that service endpoint-test2 in namespace services-424 exposes endpoints map[] (22.827034ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:51:08.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-424" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.051 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":243,"skipped":3878,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:51:08.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 15:51:09.078: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 15:51:09.886: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 15:51:09.891: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 15:51:09.901: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.901: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:51:09.901: INFO: pod1 from services-424 started at 2020-08-26 15:50:57 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.901: INFO: 	Container pause ready: true, restart count 0
Aug 26 15:51:09.901: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.901: INFO: 	Container app ready: true, restart count 0
Aug 26 15:51:09.901: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.901: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 15:51:09.901: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 15:51:09.916: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.916: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 15:51:09.916: INFO: pod2 from services-424 started at 2020-08-26 15:51:01 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.916: INFO: 	Container pause ready: true, restart count 0
Aug 26 15:51:09.916: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.917: INFO: 	Container httpd ready: true, restart count 0
Aug 26 15:51:09.917: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.917: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:51:09.917: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 15:51:09.917: INFO: 	Container app ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-45483a7f-5a40-4491-bebb-3ed962910cdc 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-45483a7f-5a40-4491-bebb-3ed962910cdc off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-45483a7f-5a40-4491-bebb-3ed962910cdc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:51:44.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5684" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:35.671 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":244,"skipped":3885,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:51:44.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-b4e44b6c-dc57-4296-9862-b6cb9d7b176a in namespace container-probe-7615
Aug 26 15:51:54.105: INFO: Started pod busybox-b4e44b6c-dc57-4296-9862-b6cb9d7b176a in namespace container-probe-7615
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 15:51:54.329: INFO: Initial restart count of pod busybox-b4e44b6c-dc57-4296-9862-b6cb9d7b176a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:55:56.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7615" for this suite.

• [SLOW TEST:252.982 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3903,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:55:57.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Aug 26 15:55:59.801: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:56:00.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7367" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":246,"skipped":3918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:56:00.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 26 15:56:02.855: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 15:56:21.976: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:57:29.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2666" for this suite.

• [SLOW TEST:88.649 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":247,"skipped":3942,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:57:29.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 15:57:29.929: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 15:57:30.216: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 15:57:30.219: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 15:57:30.240: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:57:30.240: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:57:30.240: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 15:57:30.240: INFO: 	Container app ready: true, restart count 0
Aug 26 15:57:30.240: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:57:30.240: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 15:57:30.240: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 15:57:30.263: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 26 15:57:30.263: INFO: 	Container httpd ready: true, restart count 0
Aug 26 15:57:30.263: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 15:57:30.263: INFO: 	Container app ready: true, restart count 0
Aug 26 15:57:30.263: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:57:30.263: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 15:57:30.263: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 15:57:30.263: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 26 15:57:30.927: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker
Aug 26 15:57:30.927: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2
Aug 26 15:57:30.927: INFO: Pod test-recreate-deployment-5f94c574ff-k4dkm requesting resource cpu=0m on Node jerma-worker2
Aug 26 15:57:30.927: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 26 15:57:30.927: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 26 15:57:30.927: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 26 15:57:30.927: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 26 15:57:30.927: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Aug 26 15:57:31.360: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-40386be3-6744-49a3-98c6-3cc198b83861.162edc954109fe05], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7916/filler-pod-40386be3-6744-49a3-98c6-3cc198b83861 to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-40386be3-6744-49a3-98c6-3cc198b83861.162edc95b8b7a469], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-40386be3-6744-49a3-98c6-3cc198b83861.162edc96921ca765], Reason = [Created], Message = [Created container filler-pod-40386be3-6744-49a3-98c6-3cc198b83861]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-40386be3-6744-49a3-98c6-3cc198b83861.162edc96d7b2ceeb], Reason = [Started], Message = [Started container filler-pod-40386be3-6744-49a3-98c6-3cc198b83861]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b607d7e7-ac36-4bc1-92e4-abf644766774.162edc954fddd4e9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7916/filler-pod-b607d7e7-ac36-4bc1-92e4-abf644766774 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b607d7e7-ac36-4bc1-92e4-abf644766774.162edc95f56898b1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b607d7e7-ac36-4bc1-92e4-abf644766774.162edc96f7c5d943], Reason = [Created], Message = [Created container filler-pod-b607d7e7-ac36-4bc1-92e4-abf644766774]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b607d7e7-ac36-4bc1-92e4-abf644766774.162edc971ecbe91d], Reason = [Started], Message = [Started container filler-pod-b607d7e7-ac36-4bc1-92e4-abf644766774]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162edc97ad34051f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:57:44.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7916" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:14.891 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":248,"skipped":3964,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:57:44.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6387 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6387;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6387 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6387;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6387.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6387.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6387.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6387.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6387.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6387.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6387.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.208.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.208.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.208.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.208.90_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6387 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6387;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6387 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6387;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6387.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6387.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6387.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6387.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6387.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6387.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6387.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6387.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6387.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.208.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.208.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.208.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.208.90_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 15:57:54.929: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:54.933: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:54.938: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:54.942: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:54.947: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:54.952: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:54.957: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:54.961: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.005: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.009: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.012: INFO: Unable to read jessie_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.015: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.019: INFO: Unable to read jessie_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.024: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.027: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.030: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:57:55.048: INFO: Lookups using dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6387 wheezy_tcp@dns-test-service.dns-6387 wheezy_udp@dns-test-service.dns-6387.svc wheezy_tcp@dns-test-service.dns-6387.svc wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6387 jessie_tcp@dns-test-service.dns-6387 jessie_udp@dns-test-service.dns-6387.svc jessie_tcp@dns-test-service.dns-6387.svc jessie_udp@_http._tcp.dns-test-service.dns-6387.svc jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc]

Aug 26 15:58:00.085: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.140: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.144: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.147: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.160: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.336: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.342: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.345: INFO: Unable to read jessie_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.349: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.353: INFO: Unable to read jessie_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.365: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:00.592: INFO: Lookups using dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6387 wheezy_tcp@dns-test-service.dns-6387 wheezy_udp@dns-test-service.dns-6387.svc wheezy_tcp@dns-test-service.dns-6387.svc wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6387 jessie_tcp@dns-test-service.dns-6387 jessie_udp@dns-test-service.dns-6387.svc jessie_tcp@dns-test-service.dns-6387.svc jessie_udp@_http._tcp.dns-test-service.dns-6387.svc jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc]

Aug 26 15:58:05.110: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.116: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.381: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.443: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.447: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.454: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.458: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.601: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.605: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.608: INFO: Unable to read jessie_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.611: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.615: INFO: Unable to read jessie_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.619: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.622: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.625: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:05.641: INFO: Lookups using dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6387 wheezy_tcp@dns-test-service.dns-6387 wheezy_udp@dns-test-service.dns-6387.svc wheezy_tcp@dns-test-service.dns-6387.svc wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6387 jessie_tcp@dns-test-service.dns-6387 jessie_udp@dns-test-service.dns-6387.svc jessie_tcp@dns-test-service.dns-6387.svc jessie_udp@_http._tcp.dns-test-service.dns-6387.svc jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc]

Aug 26 15:58:10.105: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.110: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.114: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.118: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.121: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.126: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.130: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.134: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.160: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.164: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.167: INFO: Unable to read jessie_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.171: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.176: INFO: Unable to read jessie_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.180: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.184: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:10.210: INFO: Lookups using dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6387 wheezy_tcp@dns-test-service.dns-6387 wheezy_udp@dns-test-service.dns-6387.svc wheezy_tcp@dns-test-service.dns-6387.svc wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6387 jessie_tcp@dns-test-service.dns-6387 jessie_udp@dns-test-service.dns-6387.svc jessie_tcp@dns-test-service.dns-6387.svc jessie_udp@_http._tcp.dns-test-service.dns-6387.svc jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc]

Aug 26 15:58:15.054: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.058: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.067: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.071: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.075: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.079: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.083: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.109: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.112: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.116: INFO: Unable to read jessie_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.120: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.123: INFO: Unable to read jessie_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.131: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.135: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:15.159: INFO: Lookups using dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6387 wheezy_tcp@dns-test-service.dns-6387 wheezy_udp@dns-test-service.dns-6387.svc wheezy_tcp@dns-test-service.dns-6387.svc wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6387 jessie_tcp@dns-test-service.dns-6387 jessie_udp@dns-test-service.dns-6387.svc jessie_tcp@dns-test-service.dns-6387.svc jessie_udp@_http._tcp.dns-test-service.dns-6387.svc jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc]

Aug 26 15:58:20.072: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.090: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.094: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.103: INFO: Unable to read wheezy_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.107: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.111: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.117: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.181: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.185: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.190: INFO: Unable to read jessie_udp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.194: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387 from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.198: INFO: Unable to read jessie_udp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.204: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc from pod dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a: the server could not find the requested resource (get pods dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a)
Aug 26 15:58:20.310: INFO: Lookups using dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6387 wheezy_tcp@dns-test-service.dns-6387 wheezy_udp@dns-test-service.dns-6387.svc wheezy_tcp@dns-test-service.dns-6387.svc wheezy_udp@_http._tcp.dns-test-service.dns-6387.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6387.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6387 jessie_tcp@dns-test-service.dns-6387 jessie_udp@dns-test-service.dns-6387.svc jessie_tcp@dns-test-service.dns-6387.svc jessie_udp@_http._tcp.dns-test-service.dns-6387.svc jessie_tcp@_http._tcp.dns-test-service.dns-6387.svc]

Aug 26 15:58:26.002: INFO: DNS probes using dns-6387/dns-test-f88c2ae9-8726-4265-88a7-71a97b088b9a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:58:27.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6387" for this suite.

• [SLOW TEST:42.881 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":249,"skipped":3970,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:58:27.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-cb83d2a8-34dc-41ec-add1-ca6c06ead4b6
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-cb83d2a8-34dc-41ec-add1-ca6c06ead4b6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 15:59:38.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9812" for this suite.

• [SLOW TEST:70.805 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3971,"failed":0}
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 15:59:38.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 in namespace container-probe-8253
Aug 26 15:59:44.344: INFO: Started pod liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 in namespace container-probe-8253
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 15:59:44.349: INFO: Initial restart count of pod liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 is 0
Aug 26 16:00:02.893: INFO: Restart count of pod container-probe-8253/liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 is now 1 (18.544439779s elapsed)
Aug 26 16:00:24.228: INFO: Restart count of pod container-probe-8253/liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 is now 2 (39.879145205s elapsed)
Aug 26 16:00:44.939: INFO: Restart count of pod container-probe-8253/liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 is now 3 (1m0.590065784s elapsed)
Aug 26 16:01:06.274: INFO: Restart count of pod container-probe-8253/liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 is now 4 (1m21.925647208s elapsed)
Aug 26 16:02:13.516: INFO: Restart count of pod container-probe-8253/liveness-ae06194c-fb4f-4884-9681-f2aeaeac8fa3 is now 5 (2m29.166962451s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:02:14.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8253" for this suite.

• [SLOW TEST:156.574 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":3971,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:02:14.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-1fdada63-d039-464e-b838-c9d68069696c
STEP: Creating a pod to test consume configMaps
Aug 26 16:02:15.639: INFO: Waiting up to 5m0s for pod "pod-configmaps-c791c597-287c-4301-9a3e-8711defae876" in namespace "configmap-9586" to be "success or failure"
Aug 26 16:02:15.984: INFO: Pod "pod-configmaps-c791c597-287c-4301-9a3e-8711defae876": Phase="Pending", Reason="", readiness=false. Elapsed: 344.89046ms
Aug 26 16:02:18.093: INFO: Pod "pod-configmaps-c791c597-287c-4301-9a3e-8711defae876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.454298103s
Aug 26 16:02:20.099: INFO: Pod "pod-configmaps-c791c597-287c-4301-9a3e-8711defae876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4604102s
Aug 26 16:02:22.106: INFO: Pod "pod-configmaps-c791c597-287c-4301-9a3e-8711defae876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.466861206s
STEP: Saw pod success
Aug 26 16:02:22.106: INFO: Pod "pod-configmaps-c791c597-287c-4301-9a3e-8711defae876" satisfied condition "success or failure"
Aug 26 16:02:22.306: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c791c597-287c-4301-9a3e-8711defae876 container configmap-volume-test: 
STEP: delete the pod
Aug 26 16:02:22.601: INFO: Waiting for pod pod-configmaps-c791c597-287c-4301-9a3e-8711defae876 to disappear
Aug 26 16:02:22.646: INFO: Pod pod-configmaps-c791c597-287c-4301-9a3e-8711defae876 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:02:22.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9586" for this suite.

• [SLOW TEST:7.944 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4018,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:02:22.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 26 16:02:22.902: INFO: PodSpec: initContainers in spec.initContainers
Aug 26 16:03:23.041: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-70a2dcb1-e050-40b8-91f0-c2638a8d77c7", GenerateName:"", Namespace:"init-container-871", SelfLink:"/api/v1/namespaces/init-container-871/pods/pod-init-70a2dcb1-e050-40b8-91f0-c2638a8d77c7", UID:"513aaa1f-bfa7-4d61-998a-11c8cc4d02e2", ResourceVersion:"3923506", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734054542, loc:(*time.Location)(0x610c660)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"901834192"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hcfmp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x9c9e080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hcfmp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hcfmp", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hcfmp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x9b68068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x8016300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x9b68100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x9b68120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x9b68128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x9b6812c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054543, loc:(*time.Location)(0x610c660)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054543, loc:(*time.Location)(0x610c660)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054543, loc:(*time.Location)(0x610c660)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054542, loc:(*time.Location)(0x610c660)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.219", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.219"}}, StartTime:(*v1.Time)(0x9c9e160), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x9c9e180), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x8b7e280)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://95048cc20f375a56efbb3134fa32cef9e9792741df59d1af85423c9a6d21d445", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x9dda030), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x9dda020), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0x9b681af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:03:23.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-871" for this suite.

• [SLOW TEST:61.655 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":253,"skipped":4053,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:03:24.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:03:27.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8" in namespace "downward-api-7095" to be "success or failure"
Aug 26 16:03:27.529: INFO: Pod "downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 411.162807ms
Aug 26 16:03:29.737: INFO: Pod "downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619675789s
Aug 26 16:03:31.929: INFO: Pod "downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.811793522s
Aug 26 16:03:34.013: INFO: Pod "downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.895295003s
STEP: Saw pod success
Aug 26 16:03:34.013: INFO: Pod "downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8" satisfied condition "success or failure"
Aug 26 16:03:34.017: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8 container client-container: 
STEP: delete the pod
Aug 26 16:03:34.347: INFO: Waiting for pod downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8 to disappear
Aug 26 16:03:34.409: INFO: Pod downwardapi-volume-9b25258a-5c76-4be1-85bc-cabebe276ae8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:03:34.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7095" for this suite.

• [SLOW TEST:10.126 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4061,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:03:34.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-9cvr9 in namespace proxy-4244
I0826 16:03:35.187817       7 runners.go:189] Created replication controller with name: proxy-service-9cvr9, namespace: proxy-4244, replica count: 1
I0826 16:03:36.240041       7 runners.go:189] proxy-service-9cvr9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:03:37.240636       7 runners.go:189] proxy-service-9cvr9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:03:38.241479       7 runners.go:189] proxy-service-9cvr9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:03:39.242195       7 runners.go:189] proxy-service-9cvr9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:03:40.242949       7 runners.go:189] proxy-service-9cvr9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 16:03:41.243633       7 runners.go:189] proxy-service-9cvr9 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 16:03:41.256: INFO: setup took 6.406613988s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 26 16:03:41.267: INFO: (0) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 9.522198ms)
Aug 26 16:03:41.267: INFO: (0) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 10.421342ms)
Aug 26 16:03:41.267: INFO: (0) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 10.228767ms)
Aug 26 16:03:41.267: INFO: (0) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 10.38067ms)
Aug 26 16:03:41.267: INFO: (0) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 10.315177ms)
Aug 26 16:03:41.268: INFO: (0) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 10.267292ms)
Aug 26 16:03:41.273: INFO: (0) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 16.005425ms)
Aug 26 16:03:41.273: INFO: (0) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 16.149063ms)
Aug 26 16:03:41.273: INFO: (0) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 16.686738ms)
Aug 26 16:03:41.273: INFO: (0) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 16.197862ms)
Aug 26 16:03:41.274: INFO: (0) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 16.606053ms)
Aug 26 16:03:41.274: INFO: (0) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test (200; 7.744551ms)
Aug 26 16:03:41.285: INFO: (1) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 8.284808ms)
Aug 26 16:03:41.286: INFO: (1) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 9.371206ms)
Aug 26 16:03:41.287: INFO: (1) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 9.636063ms)
Aug 26 16:03:41.287: INFO: (1) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 9.925097ms)
Aug 26 16:03:41.287: INFO: (1) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 9.906299ms)
Aug 26 16:03:41.287: INFO: (1) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 10.177916ms)
Aug 26 16:03:41.287: INFO: (1) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test<... (200; 10.686006ms)
Aug 26 16:03:41.288: INFO: (1) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 10.58381ms)
Aug 26 16:03:41.288: INFO: (1) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 10.830071ms)
Aug 26 16:03:41.288: INFO: (1) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 11.175231ms)
Aug 26 16:03:41.289: INFO: (1) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 11.247862ms)
Aug 26 16:03:41.289: INFO: (1) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 11.904951ms)
Aug 26 16:03:41.289: INFO: (1) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 12.122648ms)
Aug 26 16:03:41.294: INFO: (2) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 4.635546ms)
Aug 26 16:03:41.295: INFO: (2) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 5.750389ms)
Aug 26 16:03:41.296: INFO: (2) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 5.879496ms)
Aug 26 16:03:41.297: INFO: (2) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test (200; 9.130688ms)
Aug 26 16:03:41.299: INFO: (2) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 9.320562ms)
Aug 26 16:03:41.300: INFO: (2) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 9.716437ms)
Aug 26 16:03:41.300: INFO: (2) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 10.100166ms)
Aug 26 16:03:41.300: INFO: (2) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 10.660989ms)
Aug 26 16:03:41.303: INFO: (2) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 12.486609ms)
Aug 26 16:03:41.303: INFO: (2) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 12.421444ms)
Aug 26 16:03:41.303: INFO: (2) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 12.530233ms)
Aug 26 16:03:41.307: INFO: (3) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 3.753914ms)
Aug 26 16:03:41.309: INFO: (3) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.12176ms)
Aug 26 16:03:41.309: INFO: (3) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.108125ms)
Aug 26 16:03:41.309: INFO: (3) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 6.22491ms)
Aug 26 16:03:41.309: INFO: (3) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 6.269758ms)
Aug 26 16:03:41.309: INFO: (3) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 6.328744ms)
Aug 26 16:03:41.310: INFO: (3) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 6.467315ms)
Aug 26 16:03:41.310: INFO: (3) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.647336ms)
Aug 26 16:03:41.310: INFO: (3) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 6.959854ms)
Aug 26 16:03:41.310: INFO: (3) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 6.903371ms)
Aug 26 16:03:41.310: INFO: (3) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 7.121763ms)
Aug 26 16:03:41.311: INFO: (3) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: ... (200; 6.017326ms)
Aug 26 16:03:41.318: INFO: (4) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 5.556895ms)
Aug 26 16:03:41.318: INFO: (4) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 5.65869ms)
Aug 26 16:03:41.319: INFO: (4) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.119611ms)
Aug 26 16:03:41.319: INFO: (4) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 6.542252ms)
Aug 26 16:03:41.319: INFO: (4) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.350525ms)
Aug 26 16:03:41.319: INFO: (4) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 6.69189ms)
Aug 26 16:03:41.319: INFO: (4) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 6.567349ms)
Aug 26 16:03:41.320: INFO: (4) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 7.196328ms)
Aug 26 16:03:41.321: INFO: (4) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 8.351739ms)
Aug 26 16:03:41.322: INFO: (4) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 8.905305ms)
Aug 26 16:03:41.322: INFO: (4) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 9.09882ms)
Aug 26 16:03:41.323: INFO: (4) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 10.394655ms)
Aug 26 16:03:41.323: INFO: (4) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 10.166665ms)
Aug 26 16:03:41.330: INFO: (5) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 7.152234ms)
Aug 26 16:03:41.331: INFO: (5) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 7.203373ms)
Aug 26 16:03:41.331: INFO: (5) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 7.355496ms)
Aug 26 16:03:41.331: INFO: (5) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 7.3843ms)
Aug 26 16:03:41.331: INFO: (5) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test<... (200; 7.749406ms)
Aug 26 16:03:41.332: INFO: (5) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 8.469625ms)
Aug 26 16:03:41.332: INFO: (5) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 8.396903ms)
Aug 26 16:03:41.332: INFO: (5) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 8.700515ms)
Aug 26 16:03:41.332: INFO: (5) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 8.764067ms)
Aug 26 16:03:41.332: INFO: (5) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 9.038603ms)
Aug 26 16:03:41.332: INFO: (5) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 9.273408ms)
Aug 26 16:03:41.332: INFO: (5) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 9.075908ms)
Aug 26 16:03:41.336: INFO: (6) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 3.867854ms)
Aug 26 16:03:41.337: INFO: (6) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 4.917897ms)
Aug 26 16:03:41.338: INFO: (6) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 5.085923ms)
Aug 26 16:03:41.338: INFO: (6) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 5.676208ms)
Aug 26 16:03:41.339: INFO: (6) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.518956ms)
Aug 26 16:03:41.339: INFO: (6) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.63703ms)
Aug 26 16:03:41.340: INFO: (6) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 7.159334ms)
Aug 26 16:03:41.341: INFO: (6) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test (200; 8.475259ms)
Aug 26 16:03:41.342: INFO: (6) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 8.883664ms)
Aug 26 16:03:41.342: INFO: (6) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 8.835995ms)
Aug 26 16:03:41.342: INFO: (6) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 8.753038ms)
Aug 26 16:03:41.342: INFO: (6) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 8.945536ms)
Aug 26 16:03:41.343: INFO: (6) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 9.634652ms)
Aug 26 16:03:41.343: INFO: (6) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 10.243882ms)
Aug 26 16:03:41.345: INFO: (6) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 12.160001ms)
Aug 26 16:03:41.350: INFO: (7) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 4.23336ms)
Aug 26 16:03:41.350: INFO: (7) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 4.180163ms)
Aug 26 16:03:41.351: INFO: (7) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 5.794705ms)
Aug 26 16:03:41.352: INFO: (7) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 6.252651ms)
Aug 26 16:03:41.352: INFO: (7) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 6.629949ms)
Aug 26 16:03:41.353: INFO: (7) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 7.665676ms)
Aug 26 16:03:41.354: INFO: (7) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 7.885968ms)
Aug 26 16:03:41.354: INFO: (7) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test (200; 6.868698ms)
Aug 26 16:03:41.363: INFO: (8) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 6.795246ms)
Aug 26 16:03:41.364: INFO: (8) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 7.06437ms)
Aug 26 16:03:41.364: INFO: (8) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 7.206034ms)
Aug 26 16:03:41.364: INFO: (8) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 7.253382ms)
Aug 26 16:03:41.364: INFO: (8) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 7.696615ms)
Aug 26 16:03:41.364: INFO: (8) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 7.637009ms)
Aug 26 16:03:41.369: INFO: (9) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 3.788539ms)
Aug 26 16:03:41.369: INFO: (9) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 4.295982ms)
Aug 26 16:03:41.369: INFO: (9) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 4.51491ms)
Aug 26 16:03:41.369: INFO: (9) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 4.631752ms)
Aug 26 16:03:41.369: INFO: (9) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 4.820555ms)
Aug 26 16:03:41.370: INFO: (9) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 5.22564ms)
Aug 26 16:03:41.370: INFO: (9) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 5.457131ms)
Aug 26 16:03:41.370: INFO: (9) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 5.828786ms)
Aug 26 16:03:41.371: INFO: (9) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 6.821704ms)
Aug 26 16:03:41.372: INFO: (9) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 7.097895ms)
Aug 26 16:03:41.372: INFO: (9) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: ... (200; 9.796268ms)
Aug 26 16:03:41.378: INFO: (10) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 3.202209ms)
Aug 26 16:03:41.379: INFO: (10) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 3.414953ms)
Aug 26 16:03:41.379: INFO: (10) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 4.016042ms)
Aug 26 16:03:41.380: INFO: (10) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 5.005053ms)
Aug 26 16:03:41.381: INFO: (10) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 5.539506ms)
Aug 26 16:03:41.381: INFO: (10) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 5.674785ms)
Aug 26 16:03:41.381: INFO: (10) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 6.200854ms)
Aug 26 16:03:41.381: INFO: (10) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 5.943201ms)
Aug 26 16:03:41.381: INFO: (10) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 6.086826ms)
Aug 26 16:03:41.381: INFO: (10) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 6.043787ms)
Aug 26 16:03:41.382: INFO: (10) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: ... (200; 7.201287ms)
Aug 26 16:03:41.383: INFO: (10) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 8.121583ms)
Aug 26 16:03:41.384: INFO: (10) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 8.749292ms)
Aug 26 16:03:41.384: INFO: (10) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 8.578834ms)
Aug 26 16:03:41.388: INFO: (11) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 4.231251ms)
Aug 26 16:03:41.388: INFO: (11) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 4.309099ms)
Aug 26 16:03:41.389: INFO: (11) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 5.273231ms)
Aug 26 16:03:41.390: INFO: (11) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 5.413917ms)
Aug 26 16:03:41.390: INFO: (11) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: ... (200; 6.338219ms)
Aug 26 16:03:41.391: INFO: (11) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.403528ms)
Aug 26 16:03:41.391: INFO: (11) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 6.474986ms)
Aug 26 16:03:41.391: INFO: (11) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 6.481878ms)
Aug 26 16:03:41.391: INFO: (11) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 6.617635ms)
Aug 26 16:03:41.391: INFO: (11) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 6.646438ms)
Aug 26 16:03:41.392: INFO: (11) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 7.500835ms)
Aug 26 16:03:41.394: INFO: (11) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 9.263394ms)
Aug 26 16:03:41.394: INFO: (11) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 9.349298ms)
Aug 26 16:03:41.398: INFO: (12) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 4.220504ms)
Aug 26 16:03:41.399: INFO: (12) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 4.46399ms)
Aug 26 16:03:41.399: INFO: (12) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 4.618634ms)
Aug 26 16:03:41.399: INFO: (12) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 4.855885ms)
Aug 26 16:03:41.400: INFO: (12) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: ... (200; 5.801682ms)
Aug 26 16:03:41.400: INFO: (12) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 6.121187ms)
Aug 26 16:03:41.400: INFO: (12) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 6.091854ms)
Aug 26 16:03:41.401: INFO: (12) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 6.371394ms)
Aug 26 16:03:41.402: INFO: (12) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 7.379978ms)
Aug 26 16:03:41.403: INFO: (12) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 8.354136ms)
Aug 26 16:03:41.403: INFO: (12) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 8.640825ms)
Aug 26 16:03:41.403: INFO: (12) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 9.181199ms)
Aug 26 16:03:41.409: INFO: (13) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 5.263421ms)
Aug 26 16:03:41.409: INFO: (13) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 5.55732ms)
Aug 26 16:03:41.409: INFO: (13) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 5.520994ms)
Aug 26 16:03:41.409: INFO: (13) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 5.690791ms)
Aug 26 16:03:41.410: INFO: (13) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 5.766994ms)
Aug 26 16:03:41.410: INFO: (13) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 6.058777ms)
Aug 26 16:03:41.410: INFO: (13) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 6.476528ms)
Aug 26 16:03:41.410: INFO: (13) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.48887ms)
Aug 26 16:03:41.410: INFO: (13) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.508442ms)
Aug 26 16:03:41.410: INFO: (13) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 6.728013ms)
Aug 26 16:03:41.411: INFO: (13) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 6.881718ms)
Aug 26 16:03:41.411: INFO: (13) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: ... (200; 7.351061ms)
Aug 26 16:03:41.411: INFO: (13) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 7.454718ms)
Aug 26 16:03:41.417: INFO: (14) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 5.577992ms)
Aug 26 16:03:41.418: INFO: (14) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 6.754984ms)
Aug 26 16:03:41.418: INFO: (14) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.900237ms)
Aug 26 16:03:41.419: INFO: (14) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 7.155765ms)
Aug 26 16:03:41.419: INFO: (14) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 7.249136ms)
Aug 26 16:03:41.419: INFO: (14) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 7.47107ms)
Aug 26 16:03:41.419: INFO: (14) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 7.785714ms)
Aug 26 16:03:41.419: INFO: (14) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 7.90666ms)
Aug 26 16:03:41.420: INFO: (14) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 7.867847ms)
Aug 26 16:03:41.420: INFO: (14) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 8.020065ms)
Aug 26 16:03:41.420: INFO: (14) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 8.716535ms)
Aug 26 16:03:41.420: INFO: (14) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 8.80515ms)
Aug 26 16:03:41.421: INFO: (14) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 8.836446ms)
Aug 26 16:03:41.421: INFO: (14) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test<... (200; 4.425953ms)
Aug 26 16:03:41.426: INFO: (15) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 4.810429ms)
Aug 26 16:03:41.426: INFO: (15) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 5.297044ms)
Aug 26 16:03:41.427: INFO: (15) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 4.85484ms)
Aug 26 16:03:41.427: INFO: (15) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 5.500959ms)
Aug 26 16:03:41.427: INFO: (15) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test (200; 6.238779ms)
Aug 26 16:03:41.428: INFO: (15) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 6.505981ms)
Aug 26 16:03:41.428: INFO: (15) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.594384ms)
Aug 26 16:03:41.428: INFO: (15) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 6.704285ms)
Aug 26 16:03:41.428: INFO: (15) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 7.087572ms)
Aug 26 16:03:41.432: INFO: (16) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 3.378449ms)
Aug 26 16:03:41.434: INFO: (16) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 5.842335ms)
Aug 26 16:03:41.434: INFO: (16) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 5.803602ms)
Aug 26 16:03:41.434: INFO: (16) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 5.791454ms)
Aug 26 16:03:41.435: INFO: (16) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 5.653287ms)
Aug 26 16:03:41.435: INFO: (16) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 5.688002ms)
Aug 26 16:03:41.435: INFO: (16) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test (200; 5.765751ms)
Aug 26 16:03:41.435: INFO: (16) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.524596ms)
Aug 26 16:03:41.435: INFO: (16) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.633251ms)
Aug 26 16:03:41.436: INFO: (16) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 6.823693ms)
Aug 26 16:03:41.436: INFO: (16) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 6.879281ms)
Aug 26 16:03:41.436: INFO: (16) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 6.98161ms)
Aug 26 16:03:41.438: INFO: (16) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 8.677645ms)
Aug 26 16:03:41.442: INFO: (16) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 12.923251ms)
Aug 26 16:03:41.446: INFO: (17) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 4.584138ms)
Aug 26 16:03:41.447: INFO: (17) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 5.066558ms)
Aug 26 16:03:41.448: INFO: (17) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 5.641127ms)
Aug 26 16:03:41.448: INFO: (17) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 5.76869ms)
Aug 26 16:03:41.449: INFO: (17) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: ... (200; 7.346533ms)
Aug 26 16:03:41.449: INFO: (17) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 7.252895ms)
Aug 26 16:03:41.449: INFO: (17) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 7.339506ms)
Aug 26 16:03:41.450: INFO: (17) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 7.501422ms)
Aug 26 16:03:41.450: INFO: (17) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 7.421801ms)
Aug 26 16:03:41.450: INFO: (17) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 8.17535ms)
Aug 26 16:03:41.451: INFO: (17) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:1080/proxy/: test<... (200; 8.311408ms)
Aug 26 16:03:41.451: INFO: (17) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 8.381473ms)
Aug 26 16:03:41.454: INFO: (18) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 3.236419ms)
Aug 26 16:03:41.455: INFO: (18) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test<... (200; 5.523235ms)
Aug 26 16:03:41.457: INFO: (18) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 6.25327ms)
Aug 26 16:03:41.457: INFO: (18) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname2/proxy/: tls qux (200; 6.213526ms)
Aug 26 16:03:41.457: INFO: (18) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.322076ms)
Aug 26 16:03:41.457: INFO: (18) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 6.442239ms)
Aug 26 16:03:41.458: INFO: (18) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.662763ms)
Aug 26 16:03:41.458: INFO: (18) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 6.773312ms)
Aug 26 16:03:41.458: INFO: (18) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 6.971614ms)
Aug 26 16:03:41.458: INFO: (18) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 7.096301ms)
Aug 26 16:03:41.458: INFO: (18) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 7.225677ms)
Aug 26 16:03:41.459: INFO: (18) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 7.55682ms)
Aug 26 16:03:41.463: INFO: (19) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:1080/proxy/: ... (200; 3.60619ms)
Aug 26 16:03:41.463: INFO: (19) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname2/proxy/: bar (200; 4.193507ms)
Aug 26 16:03:41.463: INFO: (19) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 4.193086ms)
Aug 26 16:03:41.463: INFO: (19) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:462/proxy/: tls qux (200; 4.304401ms)
Aug 26 16:03:41.464: INFO: (19) /api/v1/namespaces/proxy-4244/services/http:proxy-service-9cvr9:portname1/proxy/: foo (200; 4.815104ms)
Aug 26 16:03:41.465: INFO: (19) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7/proxy/: test (200; 5.768622ms)
Aug 26 16:03:41.465: INFO: (19) /api/v1/namespaces/proxy-4244/services/https:proxy-service-9cvr9:tlsportname1/proxy/: tls baz (200; 5.867089ms)
Aug 26 16:03:41.465: INFO: (19) /api/v1/namespaces/proxy-4244/pods/proxy-service-9cvr9-2wkn7:162/proxy/: bar (200; 6.111062ms)
Aug 26 16:03:41.466: INFO: (19) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname1/proxy/: foo (200; 6.47895ms)
Aug 26 16:03:41.466: INFO: (19) /api/v1/namespaces/proxy-4244/pods/http:proxy-service-9cvr9-2wkn7:160/proxy/: foo (200; 6.488497ms)
Aug 26 16:03:41.466: INFO: (19) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:443/proxy/: test<... (200; 7.116314ms)
Aug 26 16:03:41.467: INFO: (19) /api/v1/namespaces/proxy-4244/pods/https:proxy-service-9cvr9-2wkn7:460/proxy/: tls baz (200; 7.416534ms)
Aug 26 16:03:41.468: INFO: (19) /api/v1/namespaces/proxy-4244/services/proxy-service-9cvr9:portname2/proxy/: bar (200; 8.829031ms)
STEP: deleting ReplicationController proxy-service-9cvr9 in namespace proxy-4244, will wait for the garbage collector to delete the pods
Aug 26 16:03:41.531: INFO: Deleting ReplicationController proxy-service-9cvr9 took: 9.348006ms
Aug 26 16:03:41.832: INFO: Terminating ReplicationController proxy-service-9cvr9 pods took: 301.082943ms
[AfterEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:03:52.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4244" for this suite.

• [SLOW TEST:18.300 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":255,"skipped":4062,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:03:52.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:04:03.207: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:04:05.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054642, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:04:07.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054642, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:04:10.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054642, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:04:12.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054642, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:04:14.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054642, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:04:15.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054643, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054644, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054642, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:04:19.347: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:04:22.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6601" for this suite.
STEP: Destroying namespace "webhook-6601-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:30.504 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":256,"skipped":4069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:04:23.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 26 16:04:23.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 26 16:05:38.368: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 16:05:57.700: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:06:55.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9119" for this suite.

• [SLOW TEST:151.819 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":257,"skipped":4123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:06:55.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 26 16:07:03.248: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 16:07:03.261: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 16:07:05.261: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 16:07:05.268: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 16:07:07.261: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 16:07:07.291: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 16:07:09.261: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 16:07:09.272: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:07:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6749" for this suite.

• [SLOW TEST:14.206 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4156,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:07:09.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 26 16:07:20.195: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8419 PodName:pod-sharedvolume-7b25c603-dda6-447f-9d4b-4f018faa4703 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 16:07:20.195: INFO: >>> kubeConfig: /root/.kube/config
I0826 16:07:20.303274       7 log.go:172] (0x7456f50) (0x7457030) Create stream
I0826 16:07:20.303508       7 log.go:172] (0x7456f50) (0x7457030) Stream added, broadcasting: 1
I0826 16:07:20.307173       7 log.go:172] (0x7456f50) Reply frame received for 1
I0826 16:07:20.307369       7 log.go:172] (0x7456f50) (0x7457500) Create stream
I0826 16:07:20.307461       7 log.go:172] (0x7456f50) (0x7457500) Stream added, broadcasting: 3
I0826 16:07:20.309116       7 log.go:172] (0x7456f50) Reply frame received for 3
I0826 16:07:20.309344       7 log.go:172] (0x7456f50) (0x74577a0) Create stream
I0826 16:07:20.309464       7 log.go:172] (0x7456f50) (0x74577a0) Stream added, broadcasting: 5
I0826 16:07:20.311320       7 log.go:172] (0x7456f50) Reply frame received for 5
I0826 16:07:20.380635       7 log.go:172] (0x7456f50) Data frame received for 3
I0826 16:07:20.380921       7 log.go:172] (0x7457500) (3) Data frame handling
I0826 16:07:20.381084       7 log.go:172] (0x7456f50) Data frame received for 5
I0826 16:07:20.381261       7 log.go:172] (0x74577a0) (5) Data frame handling
I0826 16:07:20.381365       7 log.go:172] (0x7457500) (3) Data frame sent
I0826 16:07:20.381500       7 log.go:172] (0x7456f50) Data frame received for 3
I0826 16:07:20.381594       7 log.go:172] (0x7457500) (3) Data frame handling
I0826 16:07:20.381992       7 log.go:172] (0x7456f50) Data frame received for 1
I0826 16:07:20.382104       7 log.go:172] (0x7457030) (1) Data frame handling
I0826 16:07:20.382239       7 log.go:172] (0x7457030) (1) Data frame sent
I0826 16:07:20.382371       7 log.go:172] (0x7456f50) (0x7457030) Stream removed, broadcasting: 1
I0826 16:07:20.382520       7 log.go:172] (0x7456f50) Go away received
I0826 16:07:20.382858       7 log.go:172] (0x7456f50) (0x7457030) Stream removed, broadcasting: 1
I0826 16:07:20.382978       7 log.go:172] (0x7456f50) (0x7457500) Stream removed, broadcasting: 3
I0826 16:07:20.383080       7 log.go:172] (0x7456f50) (0x74577a0) Stream removed, broadcasting: 5
Aug 26 16:07:20.383: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:07:20.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8419" for this suite.

• [SLOW TEST:11.131 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":259,"skipped":4170,"failed":0}
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:07:20.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 26 16:07:25.249: INFO: Successfully updated pod "labelsupdate198a5860-4856-48b8-b0a0-199e95ef84eb"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:07:29.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7885" for this suite.

• [SLOW TEST:8.969 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4170,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:07:29.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Aug 26 16:07:29.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-2240 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 26 16:07:37.852: INFO: stderr: ""
Aug 26 16:07:37.852: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Aug 26 16:07:37.852: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 26 16:07:37.852: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2240" to be "running and ready, or succeeded"
Aug 26 16:07:37.914: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 61.378792ms
Aug 26 16:07:40.992: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.139545498s
Aug 26 16:07:43.000: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14721231s
Aug 26 16:07:45.006: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 7.153796775s
Aug 26 16:07:45.007: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 26 16:07:45.007: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 26 16:07:45.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2240'
Aug 26 16:07:47.466: INFO: stderr: ""
Aug 26 16:07:47.467: INFO: stdout: "I0826 16:07:42.498939       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/mkch 518\nI0826 16:07:42.699215       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/ks7 252\nI0826 16:07:42.899126       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/ddk5 375\nI0826 16:07:43.099115       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mhrj 453\nI0826 16:07:43.299137       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/gnp 501\nI0826 16:07:43.499153       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/b8m 323\nI0826 16:07:43.699085       1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/mb8 482\nI0826 16:07:43.899120       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/qkfz 386\nI0826 16:07:44.099090       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/t5jd 441\nI0826 16:07:44.299193       1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/vl7z 225\nI0826 16:07:44.499199       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/xwc 395\nI0826 16:07:44.699177       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/2hmn 559\nI0826 16:07:44.899152       1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/mw8h 560\nI0826 16:07:45.099123       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/ktcn 293\nI0826 16:07:45.299120       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/xsg 499\nI0826 16:07:45.499114       1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/wfj 545\nI0826 16:07:45.699077       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/5kdb 342\nI0826 16:07:45.899083       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/2psh 444\nI0826 16:07:46.099109       1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/cg9h 522\nI0826 16:07:46.299111       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/482 262\nI0826 16:07:46.499113       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/rksb 558\nI0826 16:07:46.699155       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/qll 215\nI0826 16:07:46.899131       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/9fr 356\nI0826 16:07:47.099100       1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/qctn 229\nI0826 16:07:47.299147       1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/gvx7 344\n"
STEP: limiting log lines
Aug 26 16:07:47.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2240 --tail=1'
Aug 26 16:07:48.672: INFO: stderr: ""
Aug 26 16:07:48.672: INFO: stdout: "I0826 16:07:48.499106       1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/snv 490\n"
Aug 26 16:07:48.673: INFO: got output "I0826 16:07:48.499106       1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/snv 490\n"
STEP: limiting log bytes
Aug 26 16:07:48.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2240 --limit-bytes=1'
Aug 26 16:07:49.911: INFO: stderr: ""
Aug 26 16:07:49.911: INFO: stdout: "I"
Aug 26 16:07:49.912: INFO: got output "I"
STEP: exposing timestamps
Aug 26 16:07:49.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2240 --tail=1 --timestamps'
Aug 26 16:07:51.089: INFO: stderr: ""
Aug 26 16:07:51.089: INFO: stdout: "2020-08-26T16:07:50.899280854Z I0826 16:07:50.899116       1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/mxw 442\n"
Aug 26 16:07:51.090: INFO: got output "2020-08-26T16:07:50.899280854Z I0826 16:07:50.899116       1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/mxw 442\n"
STEP: restricting to a time range
Aug 26 16:07:53.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2240 --since=1s'
Aug 26 16:07:54.773: INFO: stderr: ""
Aug 26 16:07:54.773: INFO: stdout: "I0826 16:07:53.899073       1 logs_generator.go:76] 57 PUT /api/v1/namespaces/kube-system/pods/2pv 446\nI0826 16:07:54.099079       1 logs_generator.go:76] 58 GET /api/v1/namespaces/default/pods/n6z 391\nI0826 16:07:54.299107       1 logs_generator.go:76] 59 POST /api/v1/namespaces/default/pods/569r 568\nI0826 16:07:54.499085       1 logs_generator.go:76] 60 PUT /api/v1/namespaces/ns/pods/28r 435\nI0826 16:07:54.699087       1 logs_generator.go:76] 61 POST /api/v1/namespaces/default/pods/zgz 490\n"
Aug 26 16:07:54.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2240 --since=24h'
Aug 26 16:07:56.031: INFO: stderr: ""
Aug 26 16:07:56.031: INFO: stdout: "I0826 16:07:42.498939       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/mkch 518\nI0826 16:07:42.699215       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/ks7 252\nI0826 16:07:42.899126       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/ddk5 375\nI0826 16:07:43.099115       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mhrj 453\nI0826 16:07:43.299137       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/gnp 501\nI0826 16:07:43.499153       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/b8m 323\nI0826 16:07:43.699085       1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/mb8 482\nI0826 16:07:43.899120       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/qkfz 386\nI0826 16:07:44.099090       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/t5jd 441\nI0826 16:07:44.299193       1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/vl7z 225\nI0826 16:07:44.499199       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/xwc 395\nI0826 16:07:44.699177       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/2hmn 559\nI0826 16:07:44.899152       1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/mw8h 560\nI0826 16:07:45.099123       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/ktcn 293\nI0826 16:07:45.299120       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/xsg 499\nI0826 16:07:45.499114       1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/wfj 545\nI0826 16:07:45.699077       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/5kdb 342\nI0826 16:07:45.899083       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/2psh 444\nI0826 16:07:46.099109       1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/cg9h 522\nI0826 16:07:46.299111       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/482 262\nI0826 16:07:46.499113       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/rksb 558\nI0826 16:07:46.699155       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/qll 215\nI0826 16:07:46.899131       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/9fr 356\nI0826 16:07:47.099100       1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/qctn 229\nI0826 16:07:47.299147       1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/gvx7 344\nI0826 16:07:47.499085       1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/4464 571\nI0826 16:07:47.699050       1 logs_generator.go:76] 26 POST /api/v1/namespaces/ns/pods/djd 591\nI0826 16:07:47.899080       1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/sbg8 539\nI0826 16:07:48.099091       1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/bhh4 528\nI0826 16:07:48.299146       1 logs_generator.go:76] 29 POST /api/v1/namespaces/default/pods/qzgf 298\nI0826 16:07:48.499106       1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/snv 490\nI0826 16:07:48.699082       1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/xj95 391\nI0826 16:07:48.899082       1 logs_generator.go:76] 32 PUT /api/v1/namespaces/ns/pods/q2t 205\nI0826 16:07:49.099083       1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/xmpk 293\nI0826 16:07:49.299055       1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/hwv 520\nI0826 16:07:49.499084       1 logs_generator.go:76] 35 GET /api/v1/namespaces/default/pods/dcqq 271\nI0826 16:07:49.699112       1 logs_generator.go:76] 36 GET /api/v1/namespaces/ns/pods/zvnf 354\nI0826 16:07:49.899099       1 logs_generator.go:76] 37 PUT /api/v1/namespaces/default/pods/6gkh 312\nI0826 16:07:50.099104       1 logs_generator.go:76] 38 POST /api/v1/namespaces/kube-system/pods/tt8 450\nI0826 16:07:50.299110       1 logs_generator.go:76] 39 POST /api/v1/namespaces/ns/pods/rqm9 587\nI0826 16:07:50.499147       1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/xtw 513\nI0826 16:07:50.699110       1 logs_generator.go:76] 41 PUT /api/v1/namespaces/default/pods/5xx9 326\nI0826 16:07:50.899116       1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/mxw 442\nI0826 16:07:51.099081       1 logs_generator.go:76] 43 PUT /api/v1/namespaces/ns/pods/v8w 400\nI0826 16:07:51.299154       1 logs_generator.go:76] 44 POST /api/v1/namespaces/ns/pods/8k4 535\nI0826 16:07:51.499199       1 logs_generator.go:76] 45 POST /api/v1/namespaces/default/pods/h8c 315\nI0826 16:07:51.699126       1 logs_generator.go:76] 46 POST /api/v1/namespaces/kube-system/pods/dcvg 520\nI0826 16:07:51.899096       1 logs_generator.go:76] 47 GET /api/v1/namespaces/ns/pods/rhhq 585\nI0826 16:07:52.099095       1 logs_generator.go:76] 48 GET /api/v1/namespaces/default/pods/rzc 335\nI0826 16:07:52.299110       1 logs_generator.go:76] 49 PUT /api/v1/namespaces/default/pods/sx5 270\nI0826 16:07:52.499169       1 logs_generator.go:76] 50 POST /api/v1/namespaces/default/pods/77k 360\nI0826 16:07:52.699104       1 logs_generator.go:76] 51 PUT /api/v1/namespaces/kube-system/pods/gtzd 222\nI0826 16:07:52.899135       1 logs_generator.go:76] 52 POST /api/v1/namespaces/kube-system/pods/qnk9 458\nI0826 16:07:53.099093       1 logs_generator.go:76] 53 PUT /api/v1/namespaces/ns/pods/rrb 365\nI0826 16:07:53.299111       1 logs_generator.go:76] 54 GET /api/v1/namespaces/ns/pods/jxm7 574\nI0826 16:07:53.499163       1 logs_generator.go:76] 55 GET /api/v1/namespaces/kube-system/pods/2sf 222\nI0826 16:07:53.699099       1 logs_generator.go:76] 56 GET /api/v1/namespaces/ns/pods/t9s 268\nI0826 16:07:53.899073       1 logs_generator.go:76] 57 PUT /api/v1/namespaces/kube-system/pods/2pv 446\nI0826 16:07:54.099079       1 logs_generator.go:76] 58 GET /api/v1/namespaces/default/pods/n6z 391\nI0826 16:07:54.299107       1 logs_generator.go:76] 59 POST /api/v1/namespaces/default/pods/569r 568\nI0826 16:07:54.499085       1 logs_generator.go:76] 60 PUT /api/v1/namespaces/ns/pods/28r 435\nI0826 16:07:54.699087       1 logs_generator.go:76] 61 POST /api/v1/namespaces/default/pods/zgz 490\nI0826 16:07:54.899089       1 logs_generator.go:76] 62 POST /api/v1/namespaces/kube-system/pods/wt2 569\nI0826 16:07:55.099107       1 logs_generator.go:76] 63 POST /api/v1/namespaces/kube-system/pods/gv9s 437\nI0826 16:07:55.299099       1 logs_generator.go:76] 64 POST /api/v1/namespaces/default/pods/58b 370\nI0826 16:07:55.499097       1 logs_generator.go:76] 65 GET /api/v1/namespaces/ns/pods/mns 387\nI0826 16:07:55.699098       1 logs_generator.go:76] 66 GET /api/v1/namespaces/kube-system/pods/jrb 479\nI0826 16:07:55.899060       1 logs_generator.go:76] 67 GET /api/v1/namespaces/kube-system/pods/v6s6 530\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 26 16:07:56.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2240'
Aug 26 16:08:00.302: INFO: stderr: ""
Aug 26 16:08:00.302: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:08:00.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2240" for this suite.

• [SLOW TEST:30.923 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":261,"skipped":4186,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:08:00.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 16:08:00.437: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9ece158f-3860-45d2-9f9b-77dc8559252a" in namespace "security-context-test-5989" to be "success or failure"
Aug 26 16:08:00.459: INFO: Pod "busybox-privileged-false-9ece158f-3860-45d2-9f9b-77dc8559252a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.044386ms
Aug 26 16:08:02.662: INFO: Pod "busybox-privileged-false-9ece158f-3860-45d2-9f9b-77dc8559252a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22533546s
Aug 26 16:08:04.670: INFO: Pod "busybox-privileged-false-9ece158f-3860-45d2-9f9b-77dc8559252a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232599422s
Aug 26 16:08:06.676: INFO: Pod "busybox-privileged-false-9ece158f-3860-45d2-9f9b-77dc8559252a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239285489s
Aug 26 16:08:06.677: INFO: Pod "busybox-privileged-false-9ece158f-3860-45d2-9f9b-77dc8559252a" satisfied condition "success or failure"
Aug 26 16:08:06.686: INFO: Got logs for pod "busybox-privileged-false-9ece158f-3860-45d2-9f9b-77dc8559252a": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:08:06.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5989" for this suite.

• [SLOW TEST:6.390 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4189,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:08:06.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 16:08:06.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:08:11.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9214" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4200,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:08:11.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 26 16:08:17.811: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2b4e2baf-5eff-4baf-99b1-f587d82ea2e5"
Aug 26 16:08:17.811: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2b4e2baf-5eff-4baf-99b1-f587d82ea2e5" in namespace "pods-8205" to be "terminated due to deadline exceeded"
Aug 26 16:08:17.893: INFO: Pod "pod-update-activedeadlineseconds-2b4e2baf-5eff-4baf-99b1-f587d82ea2e5": Phase="Running", Reason="", readiness=true. Elapsed: 81.285416ms
Aug 26 16:08:21.625: INFO: Pod "pod-update-activedeadlineseconds-2b4e2baf-5eff-4baf-99b1-f587d82ea2e5": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 3.813046132s
Aug 26 16:08:21.625: INFO: Pod "pod-update-activedeadlineseconds-2b4e2baf-5eff-4baf-99b1-f587d82ea2e5" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:08:21.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8205" for this suite.

• [SLOW TEST:11.259 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4257,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:08:22.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:08:38.663: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:08:40.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:08:42.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:08:44.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734054918, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:08:47.930: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:08:50.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2849" for this suite.
STEP: Destroying namespace "webhook-2849-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:33.173 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":265,"skipped":4328,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:08:55.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 26 16:08:57.598: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 16:09:16.229: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:10:22.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2508" for this suite.

• [SLOW TEST:87.236 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":266,"skipped":4339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:10:22.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 26 16:10:22.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:12:17.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7433" for this suite.

• [SLOW TEST:115.175 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":267,"skipped":4361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:12:17.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-53
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-53
Aug 26 16:12:18.539: INFO: Found 0 stateful pods, waiting for 1
Aug 26 16:12:28.547: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 16:12:28.590: INFO: Deleting all statefulset in ns statefulset-53
Aug 26 16:12:28.678: INFO: Scaling statefulset ss to 0
Aug 26 16:12:38.947: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:12:38.952: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:12:39.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-53" for this suite.

• [SLOW TEST:21.401 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":268,"skipped":4396,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:12:39.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:12:57.989: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:13:00.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055179, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055175, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:13:02.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055179, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055175, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:13:04.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055179, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055175, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:13:06.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055179, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055175, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:13:08.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055177, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055179, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055175, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:13:13.744: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 26 16:13:13.774: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:13:14.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7586" for this suite.
STEP: Destroying namespace "webhook-7586-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:36.711 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":269,"skipped":4406,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:13:16.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Aug 26 16:13:16.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3681'
Aug 26 16:13:18.736: INFO: stderr: ""
Aug 26 16:13:18.736: INFO: stdout: "pod/pause created\n"
Aug 26 16:13:18.736: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 26 16:13:18.736: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3681" to be "running and ready"
Aug 26 16:13:18.769: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 32.747055ms
Aug 26 16:13:20.794: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057358863s
Aug 26 16:13:23.004: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267212184s
Aug 26 16:13:25.010: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.27389524s
Aug 26 16:13:25.011: INFO: Pod "pause" satisfied condition "running and ready"
Aug 26 16:13:25.011: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 26 16:13:25.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3681'
Aug 26 16:13:26.169: INFO: stderr: ""
Aug 26 16:13:26.169: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 26 16:13:26.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3681'
Aug 26 16:13:27.365: INFO: stderr: ""
Aug 26 16:13:27.365: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 26 16:13:27.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3681'
Aug 26 16:13:28.513: INFO: stderr: ""
Aug 26 16:13:28.513: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 26 16:13:28.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3681'
Aug 26 16:13:29.640: INFO: stderr: ""
Aug 26 16:13:29.640: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Aug 26 16:13:29.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3681'
Aug 26 16:13:30.790: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 16:13:30.791: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 26 16:13:30.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3681'
Aug 26 16:13:31.961: INFO: stderr: "No resources found in kubectl-3681 namespace.\n"
Aug 26 16:13:31.962: INFO: stdout: ""
Aug 26 16:13:31.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3681 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 16:13:33.115: INFO: stderr: ""
Aug 26 16:13:33.115: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:13:33.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3681" for this suite.

• [SLOW TEST:17.068 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":270,"skipped":4413,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:13:33.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:13:33.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b" in namespace "projected-5777" to be "success or failure"
Aug 26 16:13:33.535: INFO: Pod "downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.076484ms
Aug 26 16:13:35.701: INFO: Pod "downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220548939s
Aug 26 16:13:37.708: INFO: Pod "downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b": Phase="Running", Reason="", readiness=true. Elapsed: 4.227378789s
Aug 26 16:13:39.715: INFO: Pod "downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.234536809s
STEP: Saw pod success
Aug 26 16:13:39.716: INFO: Pod "downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b" satisfied condition "success or failure"
Aug 26 16:13:39.721: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b container client-container: 
STEP: delete the pod
Aug 26 16:13:39.854: INFO: Waiting for pod downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b to disappear
Aug 26 16:13:39.879: INFO: Pod downwardapi-volume-065902d8-c60e-406a-bf49-629c9448783b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:13:39.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5777" for this suite.

• [SLOW TEST:6.765 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4417,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:13:39.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-0b951b8b-1366-4bc1-b0b3-9bbd670098c9
STEP: Creating a pod to test consume configMaps
Aug 26 16:13:42.084: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33" in namespace "projected-6057" to be "success or failure"
Aug 26 16:13:42.540: INFO: Pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33": Phase="Pending", Reason="", readiness=false. Elapsed: 456.615801ms
Aug 26 16:13:44.546: INFO: Pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462087372s
Aug 26 16:13:46.667: INFO: Pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.583268242s
Aug 26 16:13:48.787: INFO: Pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.702837824s
Aug 26 16:13:51.038: INFO: Pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33": Phase="Running", Reason="", readiness=true. Elapsed: 8.954189038s
Aug 26 16:13:53.048: INFO: Pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.964333039s
STEP: Saw pod success
Aug 26 16:13:53.049: INFO: Pod "pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33" satisfied condition "success or failure"
Aug 26 16:13:53.054: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 16:13:53.402: INFO: Waiting for pod pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33 to disappear
Aug 26 16:13:53.608: INFO: Pod pod-projected-configmaps-14af6214-14ba-4257-a0c6-48dc32d76e33 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:13:53.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6057" for this suite.

• [SLOW TEST:14.784 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4425,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:13:54.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Aug 26 16:13:56.083: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix695475041/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:13:56.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2594" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":273,"skipped":4458,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:13:56.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-42sd
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 16:14:00.038: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-42sd" in namespace "subpath-8300" to be "success or failure"
Aug 26 16:14:00.366: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Pending", Reason="", readiness=false. Elapsed: 327.810576ms
Aug 26 16:14:02.893: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.854455717s
Aug 26 16:14:05.204: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.16540595s
Aug 26 16:14:07.276: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.236993653s
Aug 26 16:14:09.604: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.565191997s
Aug 26 16:14:11.762: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 11.723288466s
Aug 26 16:14:13.768: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 13.729235534s
Aug 26 16:14:15.773: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 15.734540196s
Aug 26 16:14:17.780: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 17.74151182s
Aug 26 16:14:21.052: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 21.013036302s
Aug 26 16:14:23.057: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 23.018154299s
Aug 26 16:14:25.063: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 25.024633964s
Aug 26 16:14:27.070: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Running", Reason="", readiness=true. Elapsed: 27.03175381s
Aug 26 16:14:29.078: INFO: Pod "pod-subpath-test-projected-42sd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.039560913s
STEP: Saw pod success
Aug 26 16:14:29.079: INFO: Pod "pod-subpath-test-projected-42sd" satisfied condition "success or failure"
Aug 26 16:14:29.139: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-42sd container test-container-subpath-projected-42sd: 
STEP: delete the pod
Aug 26 16:14:30.114: INFO: Waiting for pod pod-subpath-test-projected-42sd to disappear
Aug 26 16:14:30.125: INFO: Pod pod-subpath-test-projected-42sd no longer exists
STEP: Deleting pod pod-subpath-test-projected-42sd
Aug 26 16:14:30.126: INFO: Deleting pod "pod-subpath-test-projected-42sd" in namespace "subpath-8300"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:14:30.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8300" for this suite.

• [SLOW TEST:33.159 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":274,"skipped":4465,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:14:30.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 26 16:14:30.649: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:14:44.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3145" for this suite.

• [SLOW TEST:14.113 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":275,"skipped":4479,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:14:44.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:15:06.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:15:08.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055305, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:15:11.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055305, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:15:12.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055305, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:15:15.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055306, loc:(*time.Location)(0x610c660)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055305, loc:(*time.Location)(0x610c660)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:15:19.673: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:15:30.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2653" for this suite.
STEP: Destroying namespace "webhook-2653-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:46.924 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":276,"skipped":4484,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:15:31.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5232
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 16:15:31.507: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 26 16:16:05.030: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.237:8080/dial?request=hostname&protocol=http&host=10.244.2.236&port=8080&tries=1'] Namespace:pod-network-test-5232 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 16:16:05.030: INFO: >>> kubeConfig: /root/.kube/config
I0826 16:16:05.135572       7 log.go:172] (0x962d1f0) (0x962d260) Create stream
I0826 16:16:05.135721       7 log.go:172] (0x962d1f0) (0x962d260) Stream added, broadcasting: 1
I0826 16:16:05.138434       7 log.go:172] (0x962d1f0) Reply frame received for 1
I0826 16:16:05.138544       7 log.go:172] (0x962d1f0) (0x8f98d90) Create stream
I0826 16:16:05.138597       7 log.go:172] (0x962d1f0) (0x8f98d90) Stream added, broadcasting: 3
I0826 16:16:05.139588       7 log.go:172] (0x962d1f0) Reply frame received for 3
I0826 16:16:05.139693       7 log.go:172] (0x962d1f0) (0x842f340) Create stream
I0826 16:16:05.139746       7 log.go:172] (0x962d1f0) (0x842f340) Stream added, broadcasting: 5
I0826 16:16:05.140672       7 log.go:172] (0x962d1f0) Reply frame received for 5
I0826 16:16:05.233724       7 log.go:172] (0x962d1f0) Data frame received for 3
I0826 16:16:05.233879       7 log.go:172] (0x8f98d90) (3) Data frame handling
I0826 16:16:05.234018       7 log.go:172] (0x8f98d90) (3) Data frame sent
I0826 16:16:05.234139       7 log.go:172] (0x962d1f0) Data frame received for 3
I0826 16:16:05.234240       7 log.go:172] (0x8f98d90) (3) Data frame handling
I0826 16:16:05.234797       7 log.go:172] (0x962d1f0) Data frame received for 5
I0826 16:16:05.234935       7 log.go:172] (0x842f340) (5) Data frame handling
I0826 16:16:05.236206       7 log.go:172] (0x962d1f0) Data frame received for 1
I0826 16:16:05.236344       7 log.go:172] (0x962d260) (1) Data frame handling
I0826 16:16:05.236520       7 log.go:172] (0x962d260) (1) Data frame sent
I0826 16:16:05.236674       7 log.go:172] (0x962d1f0) (0x962d260) Stream removed, broadcasting: 1
I0826 16:16:05.236929       7 log.go:172] (0x962d1f0) Go away received
I0826 16:16:05.237527       7 log.go:172] (0x962d1f0) (0x962d260) Stream removed, broadcasting: 1
I0826 16:16:05.237685       7 log.go:172] (0x962d1f0) (0x8f98d90) Stream removed, broadcasting: 3
I0826 16:16:05.237819       7 log.go:172] (0x962d1f0) (0x842f340) Stream removed, broadcasting: 5
Aug 26 16:16:05.238: INFO: Waiting for responses: map[]
Aug 26 16:16:05.243: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.237:8080/dial?request=hostname&protocol=http&host=10.244.1.46&port=8080&tries=1'] Namespace:pod-network-test-5232 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 16:16:05.243: INFO: >>> kubeConfig: /root/.kube/config
I0826 16:16:05.346933       7 log.go:172] (0x9abcf50) (0x9abd030) Create stream
I0826 16:16:05.347204       7 log.go:172] (0x9abcf50) (0x9abd030) Stream added, broadcasting: 1
I0826 16:16:05.352145       7 log.go:172] (0x9abcf50) Reply frame received for 1
I0826 16:16:05.352354       7 log.go:172] (0x9abcf50) (0x9abd3b0) Create stream
I0826 16:16:05.352472       7 log.go:172] (0x9abcf50) (0x9abd3b0) Stream added, broadcasting: 3
I0826 16:16:05.354293       7 log.go:172] (0x9abcf50) Reply frame received for 3
I0826 16:16:05.354439       7 log.go:172] (0x9abcf50) (0x8988a10) Create stream
I0826 16:16:05.354529       7 log.go:172] (0x9abcf50) (0x8988a10) Stream added, broadcasting: 5
I0826 16:16:05.356194       7 log.go:172] (0x9abcf50) Reply frame received for 5
I0826 16:16:05.430563       7 log.go:172] (0x9abcf50) Data frame received for 3
I0826 16:16:05.430844       7 log.go:172] (0x9abd3b0) (3) Data frame handling
I0826 16:16:05.431073       7 log.go:172] (0x9abd3b0) (3) Data frame sent
I0826 16:16:05.431259       7 log.go:172] (0x9abcf50) Data frame received for 3
I0826 16:16:05.431450       7 log.go:172] (0x9abd3b0) (3) Data frame handling
I0826 16:16:05.431650       7 log.go:172] (0x9abcf50) Data frame received for 5
I0826 16:16:05.431866       7 log.go:172] (0x8988a10) (5) Data frame handling
I0826 16:16:05.432582       7 log.go:172] (0x9abcf50) Data frame received for 1
I0826 16:16:05.432682       7 log.go:172] (0x9abd030) (1) Data frame handling
I0826 16:16:05.432945       7 log.go:172] (0x9abd030) (1) Data frame sent
I0826 16:16:05.433059       7 log.go:172] (0x9abcf50) (0x9abd030) Stream removed, broadcasting: 1
I0826 16:16:05.433194       7 log.go:172] (0x9abcf50) Go away received
I0826 16:16:05.433581       7 log.go:172] (0x9abcf50) (0x9abd030) Stream removed, broadcasting: 1
I0826 16:16:05.433725       7 log.go:172] (0x9abcf50) (0x9abd3b0) Stream removed, broadcasting: 3
I0826 16:16:05.433845       7 log.go:172] (0x9abcf50) (0x8988a10) Stream removed, broadcasting: 5
Aug 26 16:16:05.434: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:16:05.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5232" for this suite.

• [SLOW TEST:34.265 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 16:16:05.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 26 16:16:07.968: INFO: Waiting up to 5m0s for pod "client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c" in namespace "containers-4178" to be "success or failure"
Aug 26 16:16:08.111: INFO: Pod "client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.0445ms
Aug 26 16:16:10.710: INFO: Pod "client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.741508023s
Aug 26 16:16:12.760: INFO: Pod "client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.791652672s
Aug 26 16:16:14.830: INFO: Pod "client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c": Phase="Running", Reason="", readiness=true. Elapsed: 6.861443753s
Aug 26 16:16:16.943: INFO: Pod "client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.974795408s
STEP: Saw pod success
Aug 26 16:16:16.943: INFO: Pod "client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c" satisfied condition "success or failure"
Aug 26 16:16:17.036: INFO: Trying to get logs from node jerma-worker2 pod client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c container test-container: 
STEP: delete the pod
Aug 26 16:16:17.200: INFO: Waiting for pod client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c to disappear
Aug 26 16:16:17.794: INFO: Pod client-containers-f11da091-d164-45d9-9bc9-38e593db3b0c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 16:16:17.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4178" for this suite.

• [SLOW TEST:12.654 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 26 16:16:18.110: INFO: Running AfterSuite actions on all nodes
Aug 26 16:16:18.111: INFO: Running AfterSuite actions on node 1
Aug 26 16:16:18.111: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 7945.700 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS