I0312 21:08:37.369221 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0312 21:08:37.369396 6 e2e.go:109] Starting e2e run "07ba79d4-33f5-4122-9a8c-8ab1a2bd106d" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1584047316 - Will randomize all specs Will run 278 of 4814 specs Mar 12 21:08:37.448: INFO: >>> kubeConfig: /root/.kube/config Mar 12 21:08:37.451: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 12 21:08:37.469: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 12 21:08:37.496: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 12 21:08:37.496: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 12 21:08:37.496: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 12 21:08:37.506: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Mar 12 21:08:37.506: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 12 21:08:37.506: INFO: e2e test version: v1.17.0 Mar 12 21:08:37.507: INFO: kube-apiserver version: v1.17.2 Mar 12 21:08:37.507: INFO: >>> kubeConfig: /root/.kube/config Mar 12 21:08:37.510: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:08:37.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets Mar 12 21:08:37.564: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-66c4dff0-494d-4c62-8b6f-ae71bdede496 STEP: Creating a pod to test consume secrets Mar 12 21:08:37.572: INFO: Waiting up to 5m0s for pod "pod-secrets-1b23b80b-74db-4aa7-b75f-38b636d6aa3c" in namespace "secrets-9697" to be "success or failure" Mar 12 21:08:37.622: INFO: Pod "pod-secrets-1b23b80b-74db-4aa7-b75f-38b636d6aa3c": Phase="Pending", Reason="", readiness=false. Elapsed: 49.989113ms Mar 12 21:08:39.624: INFO: Pod "pod-secrets-1b23b80b-74db-4aa7-b75f-38b636d6aa3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.052724858s STEP: Saw pod success Mar 12 21:08:39.624: INFO: Pod "pod-secrets-1b23b80b-74db-4aa7-b75f-38b636d6aa3c" satisfied condition "success or failure" Mar 12 21:08:39.626: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1b23b80b-74db-4aa7-b75f-38b636d6aa3c container secret-env-test: STEP: delete the pod Mar 12 21:08:39.661: INFO: Waiting for pod pod-secrets-1b23b80b-74db-4aa7-b75f-38b636d6aa3c to disappear Mar 12 21:08:39.665: INFO: Pod pod-secrets-1b23b80b-74db-4aa7-b75f-38b636d6aa3c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:08:39.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9697" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":26,"failed":0} ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:08:39.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 12 21:08:39.716: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 12 21:08:40.013: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 12 21:08:42.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644120, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644120, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644120, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644120, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 21:08:44.828: INFO: Waited 731.022847ms for the sample-apiserver to be ready to handle requests. 
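For reference, the registration step exercised here boils down to creating an APIService object that tells the aggregation layer to route one group/version to an in-cluster Service backed by the extension apiserver. A minimal sketch, assuming a hypothetical group wardle.example.com and a Service named sample-api in the default namespace (not the manifests this test actually deploys):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com            # hypothetical API group
  version: v1alpha1
  service:
    name: sample-api                   # hypothetical Service fronting the extension apiserver
    namespace: default
  insecureSkipTLSVerify: true          # sketch only; production registrations pin caBundle instead
  groupPriorityMinimum: 1000
  versionPriority: 15
EOF
# Once the backing Deployment reports Available, the aggregated group shows up in discovery:
kubectl get --raw /apis/wardle.example.com/v1alpha1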
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:08:45.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3270" for this suite. • [SLOW TEST:5.690 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":2,"skipped":26,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:08:45.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:08:46.134: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:08:48.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644126, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644126, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644126, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644126, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:08:51.185: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that 
should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:01.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5504" for this suite. STEP: Destroying namespace "webhook-5504-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.043 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":3,"skipped":39,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:01.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 12 21:09:01.496: INFO: Waiting up to 5m0s for pod "pod-268ceecf-0432-45ac-b0e5-63bb606765ac" in namespace "emptydir-1275" to be "success or failure" Mar 12 21:09:01.498: INFO: Pod "pod-268ceecf-0432-45ac-b0e5-63bb606765ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.701114ms Mar 12 21:09:03.501: INFO: Pod "pod-268ceecf-0432-45ac-b0e5-63bb606765ac": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.005811907s STEP: Saw pod success Mar 12 21:09:03.501: INFO: Pod "pod-268ceecf-0432-45ac-b0e5-63bb606765ac" satisfied condition "success or failure" Mar 12 21:09:03.504: INFO: Trying to get logs from node jerma-worker2 pod pod-268ceecf-0432-45ac-b0e5-63bb606765ac container test-container: STEP: delete the pod Mar 12 21:09:03.555: INFO: Waiting for pod pod-268ceecf-0432-45ac-b0e5-63bb606765ac to disappear Mar 12 21:09:03.563: INFO: Pod pod-268ceecf-0432-45ac-b0e5-63bb606765ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:03.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1275" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":56,"failed":0} SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:03.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 12 21:09:06.146: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8303 pod-service-account-03e5ecc5-6c37-4ea6-9ae3-73dc154538ef -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 12 21:09:08.138: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8303 pod-service-account-03e5ecc5-6c37-4ea6-9ae3-73dc154538ef -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 12 21:09:08.303: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8303 pod-service-account-03e5ecc5-6c37-4ea6-9ae3-73dc154538ef -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:08.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8303" for this suite. 
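The three kubectl exec commands above read the standard paths where the kubelet mounts service-account credentials into every container (unless automounting is disabled): the bearer token, the cluster CA bundle, and the pod's namespace. The same check can be reproduced by hand against any running pod (the pod name here is hypothetical):

kubectl exec demo-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec demo-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec demo-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace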
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":5,"skipped":61,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:08.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0312 21:09:09.611114 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 21:09:09.611: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:09.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9393" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":6,"skipped":66,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:09.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3992/configmap-test-ba268e9b-fafc-4aab-a5d0-4b580404ebc4 STEP: Creating a pod to test consume configMaps Mar 12 21:09:09.693: INFO: Waiting up to 5m0s for pod "pod-configmaps-49c86046-41b1-4ed9-a539-5a6d889a1c92" in namespace "configmap-3992" to be "success or failure" Mar 12 21:09:09.697: INFO: Pod "pod-configmaps-49c86046-41b1-4ed9-a539-5a6d889a1c92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121135ms Mar 12 21:09:11.701: INFO: Pod "pod-configmaps-49c86046-41b1-4ed9-a539-5a6d889a1c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008322497s STEP: Saw pod success Mar 12 21:09:11.701: INFO: Pod "pod-configmaps-49c86046-41b1-4ed9-a539-5a6d889a1c92" satisfied condition "success or failure" Mar 12 21:09:11.704: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-49c86046-41b1-4ed9-a539-5a6d889a1c92 container env-test: STEP: delete the pod Mar 12 21:09:11.723: INFO: Waiting for pod pod-configmaps-49c86046-41b1-4ed9-a539-5a6d889a1c92 to disappear Mar 12 21:09:11.727: INFO: Pod pod-configmaps-49c86046-41b1-4ed9-a539-5a6d889a1c92 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:11.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3992" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:11.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:09:11.799: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.449429ms) Mar 12 21:09:11.804: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.240603ms) Mar 12 21:09:11.843: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 39.713295ms) Mar 12 21:09:11.860: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 16.342523ms) Mar 12 21:09:11.863: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.936746ms) Mar 12 21:09:11.866: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.81452ms) Mar 12 21:09:11.872: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 6.615627ms) Mar 12 21:09:11.878: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 5.468569ms) Mar 12 21:09:11.884: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 6.049036ms) Mar 12 21:09:11.888: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.709981ms) Mar 12 21:09:11.891: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.114903ms) Mar 12 21:09:11.894: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.808668ms) Mar 12 21:09:11.896: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.625489ms) Mar 12 21:09:11.899: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.535387ms) Mar 12 21:09:11.901: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.546131ms) Mar 12 21:09:11.904: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.516667ms) Mar 12 21:09:11.907: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.675198ms) Mar 12 21:09:11.909: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.844111ms) Mar 12 21:09:11.912: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.590537ms) Mar 12 21:09:11.915: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 2.651152ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:11.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4288" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":8,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:11.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:15.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8333" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":9,"skipped":138,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:15.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:09:15.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9410" for this suite. 
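A Lease in coordination.k8s.io is just a small record of a holder identity plus timing fields, and this conformance test exercises plain CRUD on it. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease
spec:
  holderIdentity: demo-holder     # who currently holds the lease
  leaseDurationSeconds: 30        # how long the holder counts as live after renewTime
EOF
kubectl get lease demo-lease -o yaml   # acquireTime/renewTime/leaseTransitions appear once set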
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":10,"skipped":150,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:09:15.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5177 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5177 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5177 Mar 12 21:09:15.236: INFO: Found 0 stateful pods, waiting for 1 Mar 12 21:09:25.239: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 12 21:09:25.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:09:25.406: INFO: stderr: "I0312 21:09:25.325332 109 log.go:172] (0xc000105290) (0xc0007b6000) Create stream\nI0312 21:09:25.325374 109 log.go:172] (0xc000105290) (0xc0007b6000) Stream added, broadcasting: 1\nI0312 21:09:25.326996 109 log.go:172] (0xc000105290) Reply frame received for 1\nI0312 21:09:25.327013 109 log.go:172] (0xc000105290) (0xc0007b6140) Create stream\nI0312 21:09:25.327017 109 log.go:172] (0xc000105290) (0xc0007b6140) Stream added, broadcasting: 3\nI0312 21:09:25.327477 109 log.go:172] (0xc000105290) Reply frame received for 3\nI0312 21:09:25.327498 109 log.go:172] (0xc000105290) (0xc000713900) Create stream\nI0312 21:09:25.327505 109 log.go:172] (0xc000105290) (0xc000713900) Stream added, broadcasting: 5\nI0312 21:09:25.327954 109 log.go:172] (0xc000105290) Reply frame received for 5\nI0312 21:09:25.383065 109 log.go:172] (0xc000105290) Data frame received for 5\nI0312 21:09:25.383090 109 log.go:172] (0xc000713900) (5) Data frame handling\nI0312 21:09:25.383103 109 log.go:172] (0xc000713900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:09:25.403063 109 log.go:172] (0xc000105290) Data frame received for 5\nI0312 21:09:25.403085 109 log.go:172] (0xc000713900) (5) Data frame handling\nI0312 21:09:25.403098 109 log.go:172] (0xc000105290) Data frame received for 3\nI0312 21:09:25.403104 109 log.go:172] (0xc0007b6140) (3) Data frame handling\nI0312 21:09:25.403112 109 log.go:172] (0xc0007b6140) (3) Data frame 
sent\nI0312 21:09:25.403120 109 log.go:172] (0xc000105290) Data frame received for 3\nI0312 21:09:25.403127 109 log.go:172] (0xc0007b6140) (3) Data frame handling\nI0312 21:09:25.404209 109 log.go:172] (0xc000105290) Data frame received for 1\nI0312 21:09:25.404223 109 log.go:172] (0xc0007b6000) (1) Data frame handling\nI0312 21:09:25.404231 109 log.go:172] (0xc0007b6000) (1) Data frame sent\nI0312 21:09:25.404243 109 log.go:172] (0xc000105290) (0xc0007b6000) Stream removed, broadcasting: 1\nI0312 21:09:25.404251 109 log.go:172] (0xc000105290) Go away received\nI0312 21:09:25.404465 109 log.go:172] (0xc000105290) (0xc0007b6000) Stream removed, broadcasting: 1\nI0312 21:09:25.404475 109 log.go:172] (0xc000105290) (0xc0007b6140) Stream removed, broadcasting: 3\nI0312 21:09:25.404480 109 log.go:172] (0xc000105290) (0xc000713900) Stream removed, broadcasting: 5\n" Mar 12 21:09:25.406: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:09:25.406: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:09:25.409: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 12 21:09:35.413: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:09:35.413: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 21:09:35.429: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999463s Mar 12 21:09:36.433: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993129396s Mar 12 21:09:37.437: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98866879s Mar 12 21:09:38.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984321692s Mar 12 21:09:39.445: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980371745s Mar 12 21:09:40.449: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976316588s Mar 12 21:09:41.452: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.972646627s Mar 12 21:09:42.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.969966787s Mar 12 21:09:43.460: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.965779316s Mar 12 21:09:44.464: INFO: Verifying statefulset ss doesn't scale past 1 for another 962.122422ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5177 Mar 12 21:09:45.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:09:45.670: INFO: stderr: "I0312 21:09:45.599001 129 log.go:172] (0xc0000f4d10) (0xc0006b5f40) Create stream\nI0312 21:09:45.599050 129 log.go:172] (0xc0000f4d10) (0xc0006b5f40) Stream added, broadcasting: 1\nI0312 21:09:45.601002 129 log.go:172] (0xc0000f4d10) Reply frame received for 1\nI0312 21:09:45.601023 129 log.go:172] (0xc0000f4d10) (0xc000662780) Create stream\nI0312 21:09:45.601030 129 log.go:172] (0xc0000f4d10) (0xc000662780) Stream added, broadcasting: 3\nI0312 21:09:45.601763 129 log.go:172] (0xc0000f4d10) Reply frame received for 3\nI0312 21:09:45.601803 129 log.go:172] (0xc0000f4d10) (0xc000733540) Create stream\nI0312 21:09:45.601813 129 log.go:172] (0xc0000f4d10) (0xc000733540) Stream added, broadcasting: 5\nI0312 21:09:45.602866 129 log.go:172] (0xc0000f4d10) Reply frame 
received for 5\nI0312 21:09:45.665201 129 log.go:172] (0xc0000f4d10) Data frame received for 3\nI0312 21:09:45.665245 129 log.go:172] (0xc000662780) (3) Data frame handling\nI0312 21:09:45.665275 129 log.go:172] (0xc000662780) (3) Data frame sent\nI0312 21:09:45.665292 129 log.go:172] (0xc0000f4d10) Data frame received for 3\nI0312 21:09:45.665306 129 log.go:172] (0xc000662780) (3) Data frame handling\nI0312 21:09:45.665514 129 log.go:172] (0xc0000f4d10) Data frame received for 5\nI0312 21:09:45.665536 129 log.go:172] (0xc000733540) (5) Data frame handling\nI0312 21:09:45.665556 129 log.go:172] (0xc000733540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 21:09:45.665571 129 log.go:172] (0xc0000f4d10) Data frame received for 5\nI0312 21:09:45.665597 129 log.go:172] (0xc000733540) (5) Data frame handling\nI0312 21:09:45.666819 129 log.go:172] (0xc0000f4d10) Data frame received for 1\nI0312 21:09:45.666846 129 log.go:172] (0xc0006b5f40) (1) Data frame handling\nI0312 21:09:45.666862 129 log.go:172] (0xc0006b5f40) (1) Data frame sent\nI0312 21:09:45.666889 129 log.go:172] (0xc0000f4d10) (0xc0006b5f40) Stream removed, broadcasting: 1\nI0312 21:09:45.667224 129 log.go:172] (0xc0000f4d10) (0xc0006b5f40) Stream removed, broadcasting: 1\nI0312 21:09:45.667242 129 log.go:172] (0xc0000f4d10) (0xc000662780) Stream removed, broadcasting: 3\nI0312 21:09:45.667251 129 log.go:172] (0xc0000f4d10) (0xc000733540) Stream removed, broadcasting: 5\n" Mar 12 21:09:45.670: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 21:09:45.670: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 21:09:45.673: INFO: Found 1 stateful pods, waiting for 3 Mar 12 21:09:55.678: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 21:09:55.678: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 21:09:55.678: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 12 21:09:55.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:09:55.877: INFO: stderr: "I0312 21:09:55.807026 150 log.go:172] (0xc000450840) (0xc00089a000) Create stream\nI0312 21:09:55.807074 150 log.go:172] (0xc000450840) (0xc00089a000) Stream added, broadcasting: 1\nI0312 21:09:55.809643 150 log.go:172] (0xc000450840) Reply frame received for 1\nI0312 21:09:55.809683 150 log.go:172] (0xc000450840) (0xc0006a3c20) Create stream\nI0312 21:09:55.809699 150 log.go:172] (0xc000450840) (0xc0006a3c20) Stream added, broadcasting: 3\nI0312 21:09:55.810702 150 log.go:172] (0xc000450840) Reply frame received for 3\nI0312 21:09:55.810732 150 log.go:172] (0xc000450840) (0xc0001a6000) Create stream\nI0312 21:09:55.810743 150 log.go:172] (0xc000450840) (0xc0001a6000) Stream added, broadcasting: 5\nI0312 21:09:55.811587 150 log.go:172] (0xc000450840) Reply frame received for 5\nI0312 21:09:55.872418 150 log.go:172] (0xc000450840) Data frame received for 5\nI0312 21:09:55.872461 150 log.go:172] (0xc000450840) Data frame received for 3\nI0312 21:09:55.872483 150 log.go:172] (0xc0006a3c20) (3) Data frame handling\nI0312 21:09:55.872493 150 
log.go:172] (0xc0006a3c20) (3) Data frame sent\nI0312 21:09:55.872499 150 log.go:172] (0xc000450840) Data frame received for 3\nI0312 21:09:55.872505 150 log.go:172] (0xc0006a3c20) (3) Data frame handling\nI0312 21:09:55.872522 150 log.go:172] (0xc0001a6000) (5) Data frame handling\nI0312 21:09:55.872544 150 log.go:172] (0xc0001a6000) (5) Data frame sent\nI0312 21:09:55.872558 150 log.go:172] (0xc000450840) Data frame received for 5\nI0312 21:09:55.872571 150 log.go:172] (0xc0001a6000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:09:55.873696 150 log.go:172] (0xc000450840) Data frame received for 1\nI0312 21:09:55.873744 150 log.go:172] (0xc00089a000) (1) Data frame handling\nI0312 21:09:55.873764 150 log.go:172] (0xc00089a000) (1) Data frame sent\nI0312 21:09:55.873779 150 log.go:172] (0xc000450840) (0xc00089a000) Stream removed, broadcasting: 1\nI0312 21:09:55.873802 150 log.go:172] (0xc000450840) Go away received\nI0312 21:09:55.874218 150 log.go:172] (0xc000450840) (0xc00089a000) Stream removed, broadcasting: 1\nI0312 21:09:55.874246 150 log.go:172] (0xc000450840) (0xc0006a3c20) Stream removed, broadcasting: 3\nI0312 21:09:55.874254 150 log.go:172] (0xc000450840) (0xc0001a6000) Stream removed, broadcasting: 5\n" Mar 12 21:09:55.877: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:09:55.877: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:09:55.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:09:56.088: INFO: stderr: "I0312 21:09:55.980902 173 log.go:172] (0xc0003c0dc0) (0xc0006e3ea0) Create stream\nI0312 21:09:55.980947 173 log.go:172] (0xc0003c0dc0) (0xc0006e3ea0) Stream added, broadcasting: 1\nI0312 21:09:55.982768 173 log.go:172] (0xc0003c0dc0) Reply frame received for 1\nI0312 21:09:55.982796 173 log.go:172] (0xc0003c0dc0) (0xc0006e3f40) Create stream\nI0312 21:09:55.982806 173 log.go:172] (0xc0003c0dc0) (0xc0006e3f40) Stream added, broadcasting: 3\nI0312 21:09:55.983429 173 log.go:172] (0xc0003c0dc0) Reply frame received for 3\nI0312 21:09:55.983451 173 log.go:172] (0xc0003c0dc0) (0xc0006706e0) Create stream\nI0312 21:09:55.983462 173 log.go:172] (0xc0003c0dc0) (0xc0006706e0) Stream added, broadcasting: 5\nI0312 21:09:55.984003 173 log.go:172] (0xc0003c0dc0) Reply frame received for 5\nI0312 21:09:56.060374 173 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0312 21:09:56.060397 173 log.go:172] (0xc0006706e0) (5) Data frame handling\nI0312 21:09:56.060410 173 log.go:172] (0xc0006706e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:09:56.083655 173 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0312 21:09:56.083678 173 log.go:172] (0xc0006e3f40) (3) Data frame handling\nI0312 21:09:56.083691 173 log.go:172] (0xc0006e3f40) (3) Data frame sent\nI0312 21:09:56.083878 173 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0312 21:09:56.083900 173 log.go:172] (0xc0006e3f40) (3) Data frame handling\nI0312 21:09:56.083914 173 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0312 21:09:56.083922 173 log.go:172] (0xc0006706e0) (5) Data frame handling\nI0312 21:09:56.085397 173 log.go:172] (0xc0003c0dc0) Data frame received for 1\nI0312 21:09:56.085413 173 log.go:172] (0xc0006e3ea0) (1) Data frame 
handling\nI0312 21:09:56.085421 173 log.go:172] (0xc0006e3ea0) (1) Data frame sent\nI0312 21:09:56.085433 173 log.go:172] (0xc0003c0dc0) (0xc0006e3ea0) Stream removed, broadcasting: 1\nI0312 21:09:56.085447 173 log.go:172] (0xc0003c0dc0) Go away received\nI0312 21:09:56.085735 173 log.go:172] (0xc0003c0dc0) (0xc0006e3ea0) Stream removed, broadcasting: 1\nI0312 21:09:56.085755 173 log.go:172] (0xc0003c0dc0) (0xc0006e3f40) Stream removed, broadcasting: 3\nI0312 21:09:56.085762 173 log.go:172] (0xc0003c0dc0) (0xc0006706e0) Stream removed, broadcasting: 5\n" Mar 12 21:09:56.088: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:09:56.088: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:09:56.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:09:56.264: INFO: stderr: "I0312 21:09:56.190916 193 log.go:172] (0xc0001fc160) (0xc00066fcc0) Create stream\nI0312 21:09:56.190967 193 log.go:172] (0xc0001fc160) (0xc00066fcc0) Stream added, broadcasting: 1\nI0312 21:09:56.195649 193 log.go:172] (0xc0001fc160) Reply frame received for 1\nI0312 21:09:56.195692 193 log.go:172] (0xc0001fc160) (0xc000ae4000) Create stream\nI0312 21:09:56.195712 193 log.go:172] (0xc0001fc160) (0xc000ae4000) Stream added, broadcasting: 3\nI0312 21:09:56.197387 193 log.go:172] (0xc0001fc160) Reply frame received for 3\nI0312 21:09:56.197418 193 log.go:172] (0xc0001fc160) (0xc0005f4a00) Create stream\nI0312 21:09:56.197431 193 log.go:172] (0xc0001fc160) (0xc0005f4a00) Stream added, broadcasting: 5\nI0312 21:09:56.198181 193 log.go:172] (0xc0001fc160) Reply frame received for 5\nI0312 21:09:56.243462 193 log.go:172] (0xc0001fc160) Data frame received for 5\nI0312 21:09:56.243483 193 log.go:172] (0xc0005f4a00) (5) Data frame handling\nI0312 21:09:56.243511 193 log.go:172] (0xc0005f4a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:09:56.260599 193 log.go:172] (0xc0001fc160) Data frame received for 5\nI0312 21:09:56.260638 193 log.go:172] (0xc0005f4a00) (5) Data frame handling\nI0312 21:09:56.260654 193 log.go:172] (0xc0001fc160) Data frame received for 3\nI0312 21:09:56.260661 193 log.go:172] (0xc000ae4000) (3) Data frame handling\nI0312 21:09:56.260668 193 log.go:172] (0xc000ae4000) (3) Data frame sent\nI0312 21:09:56.261000 193 log.go:172] (0xc0001fc160) Data frame received for 3\nI0312 21:09:56.261013 193 log.go:172] (0xc000ae4000) (3) Data frame handling\nI0312 21:09:56.262249 193 log.go:172] (0xc0001fc160) Data frame received for 1\nI0312 21:09:56.262301 193 log.go:172] (0xc00066fcc0) (1) Data frame handling\nI0312 21:09:56.262316 193 log.go:172] (0xc00066fcc0) (1) Data frame sent\nI0312 21:09:56.262326 193 log.go:172] (0xc0001fc160) (0xc00066fcc0) Stream removed, broadcasting: 1\nI0312 21:09:56.262338 193 log.go:172] (0xc0001fc160) Go away received\nI0312 21:09:56.262564 193 log.go:172] (0xc0001fc160) (0xc00066fcc0) Stream removed, broadcasting: 1\nI0312 21:09:56.262577 193 log.go:172] (0xc0001fc160) (0xc000ae4000) Stream removed, broadcasting: 3\nI0312 21:09:56.262585 193 log.go:172] (0xc0001fc160) (0xc0005f4a00) Stream removed, broadcasting: 5\n" Mar 12 21:09:56.264: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:09:56.264: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:09:56.264: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 21:09:56.266: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 12 21:10:06.271: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:10:06.271: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:10:06.271: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:10:06.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999673s Mar 12 21:10:07.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990423525s Mar 12 21:10:08.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986682583s Mar 12 21:10:09.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983251886s Mar 12 21:10:10.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979359619s Mar 12 21:10:11.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959120339s Mar 12 21:10:12.326: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953902782s Mar 12 21:10:13.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950501217s Mar 12 21:10:14.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946496001s Mar 12 21:10:15.336: INFO: Verifying statefulset ss doesn't scale past 3 for another 943.284864ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5177 Mar 12 21:10:16.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:10:16.583: INFO: stderr: "I0312 21:10:16.432467 213 log.go:172] (0xc000925600) (0xc00098e8c0) Create stream\nI0312 21:10:16.432498 213 log.go:172] (0xc000925600) (0xc00098e8c0) Stream added, broadcasting: 1\nI0312 21:10:16.435023 213 log.go:172] (0xc000925600) Reply frame received for 1\nI0312 21:10:16.435044 213 log.go:172] (0xc000925600) (0xc0006845a0) Create stream\nI0312 21:10:16.435049 213 log.go:172] (0xc000925600) (0xc0006845a0) Stream added, broadcasting: 3\nI0312 21:10:16.435584 213 log.go:172] (0xc000925600) Reply frame received for 3\nI0312 21:10:16.435610 213 log.go:172] (0xc000925600) (0xc000531360) Create stream\nI0312 21:10:16.435618 213 log.go:172] (0xc000925600) (0xc000531360) Stream added, broadcasting: 5\nI0312 21:10:16.436109 213 log.go:172] (0xc000925600) Reply frame received for 5\nI0312 21:10:16.580569 213 log.go:172] (0xc000925600) Data frame received for 5\nI0312 21:10:16.580586 213 log.go:172] (0xc000531360) (5) Data frame handling\nI0312 21:10:16.580592 213 log.go:172] (0xc000531360) (5) Data frame sent\nI0312 21:10:16.580596 213 log.go:172] (0xc000925600) Data frame received for 5\nI0312 21:10:16.580599 213 log.go:172] (0xc000531360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 21:10:16.580611 213 log.go:172] (0xc000925600) Data frame received for 3\nI0312 21:10:16.580623 213 log.go:172] (0xc0006845a0) (3) Data frame handling\nI0312 21:10:16.580632 213 log.go:172] (0xc0006845a0) (3) Data frame sent\nI0312 21:10:16.580644 213 log.go:172] (0xc000925600) Data frame received for 3\nI0312 21:10:16.580650 213 log.go:172] 
(0xc0006845a0) (3) Data frame handling\nI0312 21:10:16.581329 213 log.go:172] (0xc000925600) Data frame received for 1\nI0312 21:10:16.581343 213 log.go:172] (0xc00098e8c0) (1) Data frame handling\nI0312 21:10:16.581350 213 log.go:172] (0xc00098e8c0) (1) Data frame sent\nI0312 21:10:16.581357 213 log.go:172] (0xc000925600) (0xc00098e8c0) Stream removed, broadcasting: 1\nI0312 21:10:16.581533 213 log.go:172] (0xc000925600) (0xc00098e8c0) Stream removed, broadcasting: 1\nI0312 21:10:16.581543 213 log.go:172] (0xc000925600) (0xc0006845a0) Stream removed, broadcasting: 3\nI0312 21:10:16.581550 213 log.go:172] (0xc000925600) (0xc000531360) Stream removed, broadcasting: 5\n" Mar 12 21:10:16.583: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 21:10:16.584: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 21:10:16.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:10:16.761: INFO: stderr: "I0312 21:10:16.674093 230 log.go:172] (0xc000946790) (0xc00092e000) Create stream\nI0312 21:10:16.674184 230 log.go:172] (0xc000946790) (0xc00092e000) Stream added, broadcasting: 1\nI0312 21:10:16.675507 230 log.go:172] (0xc000946790) Reply frame received for 1\nI0312 21:10:16.675538 230 log.go:172] (0xc000946790) (0xc00092e0a0) Create stream\nI0312 21:10:16.675544 230 log.go:172] (0xc000946790) (0xc00092e0a0) Stream added, broadcasting: 3\nI0312 21:10:16.676095 230 log.go:172] (0xc000946790) Reply frame received for 3\nI0312 21:10:16.676114 230 log.go:172] (0xc000946790) (0xc0006559a0) Create stream\nI0312 21:10:16.676122 230 log.go:172] (0xc000946790) (0xc0006559a0) Stream added, broadcasting: 5\nI0312 21:10:16.676624 230 log.go:172] (0xc000946790) Reply frame received for 5\nI0312 21:10:16.758427 230 log.go:172] (0xc000946790) Data frame received for 5\nI0312 21:10:16.758450 230 log.go:172] (0xc0006559a0) (5) Data frame handling\nI0312 21:10:16.758457 230 log.go:172] (0xc0006559a0) (5) Data frame sent\nI0312 21:10:16.758462 230 log.go:172] (0xc000946790) Data frame received for 5\nI0312 21:10:16.758465 230 log.go:172] (0xc0006559a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 21:10:16.758478 230 log.go:172] (0xc000946790) Data frame received for 3\nI0312 21:10:16.758484 230 log.go:172] (0xc00092e0a0) (3) Data frame handling\nI0312 21:10:16.758492 230 log.go:172] (0xc00092e0a0) (3) Data frame sent\nI0312 21:10:16.758504 230 log.go:172] (0xc000946790) Data frame received for 3\nI0312 21:10:16.758509 230 log.go:172] (0xc00092e0a0) (3) Data frame handling\nI0312 21:10:16.759310 230 log.go:172] (0xc000946790) Data frame received for 1\nI0312 21:10:16.759324 230 log.go:172] (0xc00092e000) (1) Data frame handling\nI0312 21:10:16.759331 230 log.go:172] (0xc00092e000) (1) Data frame sent\nI0312 21:10:16.759341 230 log.go:172] (0xc000946790) (0xc00092e000) Stream removed, broadcasting: 1\nI0312 21:10:16.759352 230 log.go:172] (0xc000946790) Go away received\nI0312 21:10:16.759566 230 log.go:172] (0xc000946790) (0xc00092e000) Stream removed, broadcasting: 1\nI0312 21:10:16.759578 230 log.go:172] (0xc000946790) (0xc00092e0a0) Stream removed, broadcasting: 3\nI0312 21:10:16.759583 230 log.go:172] (0xc000946790) (0xc0006559a0) Stream removed, broadcasting: 5\n" Mar 12 21:10:16.761: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 21:10:16.761: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 21:10:16.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5177 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:10:16.919: INFO: stderr: "I0312 21:10:16.851395 249 log.go:172] (0xc0005b8a50) (0xc0006a5ea0) Create stream\nI0312 21:10:16.851427 249 log.go:172] (0xc0005b8a50) (0xc0006a5ea0) Stream added, broadcasting: 1\nI0312 21:10:16.852983 249 log.go:172] (0xc0005b8a50) Reply frame received for 1\nI0312 21:10:16.853003 249 log.go:172] (0xc0005b8a50) (0xc000745540) Create stream\nI0312 21:10:16.853009 249 log.go:172] (0xc0005b8a50) (0xc000745540) Stream added, broadcasting: 3\nI0312 21:10:16.853551 249 log.go:172] (0xc0005b8a50) Reply frame received for 3\nI0312 21:10:16.853570 249 log.go:172] (0xc0005b8a50) (0xc0006a5f40) Create stream\nI0312 21:10:16.853576 249 log.go:172] (0xc0005b8a50) (0xc0006a5f40) Stream added, broadcasting: 5\nI0312 21:10:16.854053 249 log.go:172] (0xc0005b8a50) Reply frame received for 5\nI0312 21:10:16.915168 249 log.go:172] (0xc0005b8a50) Data frame received for 5\nI0312 21:10:16.915197 249 log.go:172] (0xc0006a5f40) (5) Data frame handling\nI0312 21:10:16.915207 249 log.go:172] (0xc0006a5f40) (5) Data frame sent\nI0312 21:10:16.915213 249 log.go:172] (0xc0005b8a50) Data frame received for 5\nI0312 21:10:16.915218 249 log.go:172] (0xc0006a5f40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 21:10:16.915227 249 log.go:172] (0xc0005b8a50) Data frame received for 3\nI0312 21:10:16.915266 249 log.go:172] (0xc000745540) (3) Data frame handling\nI0312 21:10:16.915281 249 log.go:172] (0xc000745540) (3) Data frame sent\nI0312 21:10:16.915289 249 log.go:172] (0xc0005b8a50) Data frame received for 3\nI0312 21:10:16.915300 249 log.go:172] (0xc000745540) (3) Data frame handling\nI0312 21:10:16.916351 249 log.go:172] (0xc0005b8a50) Data frame received for 1\nI0312 21:10:16.916364 249 log.go:172] (0xc0006a5ea0) (1) Data frame handling\nI0312 21:10:16.916375 249 log.go:172] (0xc0006a5ea0) (1) Data frame sent\nI0312 21:10:16.916384 249 log.go:172] (0xc0005b8a50) (0xc0006a5ea0) Stream removed, broadcasting: 1\nI0312 21:10:16.916398 249 log.go:172] (0xc0005b8a50) Go away received\nI0312 21:10:16.916665 249 log.go:172] (0xc0005b8a50) (0xc0006a5ea0) Stream removed, broadcasting: 1\nI0312 21:10:16.916675 249 log.go:172] (0xc0005b8a50) (0xc000745540) Stream removed, broadcasting: 3\nI0312 21:10:16.916680 249 log.go:172] (0xc0005b8a50) (0xc0006a5f40) Stream removed, broadcasting: 5\n" Mar 12 21:10:16.919: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 21:10:16.919: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 21:10:16.919: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 21:10:26.932: INFO: Deleting all statefulset in ns statefulset-5177 Mar 12 21:10:26.934: INFO: Scaling statefulset ss to 0 Mar 12 21:10:26.943: INFO: Waiting for statefulset status.replicas 
updated to 0 Mar 12 21:10:26.945: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:10:26.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5177" for this suite. • [SLOW TEST:71.795 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":11,"skipped":156,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:10:26.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:10:27.054: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 21:10:29.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6628 create -f -' Mar 12 21:10:31.944: INFO: stderr: "" Mar 12 21:10:31.945: INFO: stdout: "e2e-test-crd-publish-openapi-1313-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 21:10:31.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6628 delete e2e-test-crd-publish-openapi-1313-crds test-cr' Mar 12 21:10:32.055: INFO: stderr: "" Mar 12 21:10:32.055: INFO: stdout: "e2e-test-crd-publish-openapi-1313-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 12 21:10:32.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6628 apply -f -' Mar 12 21:10:32.290: INFO: stderr: "" Mar 12 21:10:32.290: INFO: stdout: "e2e-test-crd-publish-openapi-1313-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 21:10:32.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6628 delete e2e-test-crd-publish-openapi-1313-crds test-cr' Mar 12 21:10:32.382: INFO: stderr: "" Mar 12 21:10:32.382: INFO: stdout: "e2e-test-crd-publish-openapi-1313-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation 
schema Mar 12 21:10:32.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1313-crds' Mar 12 21:10:32.556: INFO: stderr: "" Mar 12 21:10:32.557: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1313-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:10:35.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6628" for this suite. • [SLOW TEST:8.340 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":12,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:10:35.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5074 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5074;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5074 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5074;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5074.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5074.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5074.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5074.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5074.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5074.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5074.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5074.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5074.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-5074.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5074.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5074.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5074.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.236.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.236.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.236.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.236.59_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5074 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5074;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5074 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5074;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5074.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5074.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5074.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5074.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5074.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5074.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5074.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5074.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5074.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5074.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5074.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5074.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5074.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.236.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.236.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.236.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.236.59_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 21:10:39.454: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.456: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.459: INFO: Unable to read wheezy_udp@dns-test-service.dns-5074 from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.461: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5074 from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.464: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.467: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.470: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.484: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.486: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.487: INFO: Unable to read jessie_udp@dns-test-service.dns-5074 from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.490: INFO: Unable to read jessie_tcp@dns-test-service.dns-5074 from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.492: INFO: Unable to read jessie_udp@dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.494: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.495: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.498: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5074.svc from pod dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812: the server could not find the requested resource (get pods dns-test-d6275edd-906a-4950-8032-127f08cb5812) Mar 12 21:10:39.511: INFO: Lookups using dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5074 wheezy_tcp@dns-test-service.dns-5074 wheezy_udp@dns-test-service.dns-5074.svc wheezy_tcp@dns-test-service.dns-5074.svc wheezy_udp@_http._tcp.dns-test-service.dns-5074.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5074.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5074 jessie_tcp@dns-test-service.dns-5074 jessie_udp@dns-test-service.dns-5074.svc jessie_tcp@dns-test-service.dns-5074.svc jessie_udp@_http._tcp.dns-test-service.dns-5074.svc jessie_tcp@_http._tcp.dns-test-service.dns-5074.svc] Mar 12 21:11:09.614: INFO: DNS probes using dns-5074/dns-test-d6275edd-906a-4950-8032-127f08cb5812 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:11:09.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5074" for this suite. • [SLOW TEST:34.622 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":13,"skipped":178,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:11:09.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0312 21:11:50.036448 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
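The orphan test here deletes a ReplicationController with delete options that tell the server not to cascade: the garbage collector clears the pods' ownerReferences and leaves the pods running, which is what the 30-second wait above verifies. A minimal sketch of the same operation with the kubectl of this era (v1.17); the rc name simpletest.rc is a placeholder, since the log never prints it:

  # Delete only the rc; its pods are orphaned rather than garbage-collected.
  # On kubectl v1.20+ the flag is spelled --cascade=orphan.
  kubectl --namespace gc-2192 delete rc simpletest.rc --cascade=false

Under the hood this sends DeleteOptions with propagationPolicy: Orphan to the apiserver.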
Mar 12 21:11:50.036: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:11:50.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2192" for this suite. • [SLOW TEST:40.100 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":14,"skipped":185,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:11:50.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:11:52.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2108" for this suite.
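This Docker Containers test creates a container whose command and args are both left blank, so the runtime falls back to the image's own ENTRYPOINT and CMD. A minimal sketch, using busybox purely for illustration (the conformance test uses its own test image, and the pod name is hypothetical):

  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox:1.29
      # No command: or args: fields, so the image's ENTRYPOINT/CMD run unchanged.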
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:11:52.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-66332c79-8eed-451f-9277-71264638313e STEP: Creating a pod to test consume configMaps Mar 12 21:11:52.293: INFO: Waiting up to 5m0s for pod "pod-configmaps-847254a0-498b-4a93-80b0-c88d0301c6e3" in namespace "configmap-2874" to be "success or failure" Mar 12 21:11:52.302: INFO: Pod "pod-configmaps-847254a0-498b-4a93-80b0-c88d0301c6e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.384949ms Mar 12 21:11:54.304: INFO: Pod "pod-configmaps-847254a0-498b-4a93-80b0-c88d0301c6e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011234052s STEP: Saw pod success Mar 12 21:11:54.304: INFO: Pod "pod-configmaps-847254a0-498b-4a93-80b0-c88d0301c6e3" satisfied condition "success or failure" Mar 12 21:11:54.306: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-847254a0-498b-4a93-80b0-c88d0301c6e3 container configmap-volume-test: STEP: delete the pod Mar 12 21:11:54.334: INFO: Waiting for pod pod-configmaps-847254a0-498b-4a93-80b0-c88d0301c6e3 to disappear Mar 12 21:11:54.365: INFO: Pod pod-configmaps-847254a0-498b-4a93-80b0-c88d0301c6e3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:11:54.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2874" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":222,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:11:54.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 12 21:11:54.950: INFO: created pod pod-service-account-defaultsa Mar 12 21:11:54.950: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 12 21:11:54.955: INFO: created pod pod-service-account-mountsa Mar 12 21:11:54.955: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 12 21:11:54.983: INFO: created pod pod-service-account-nomountsa Mar 12 21:11:54.983: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 12 21:11:55.029: INFO: created pod pod-service-account-defaultsa-mountspec Mar 12 21:11:55.029: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 12 21:11:55.033: INFO: created pod pod-service-account-mountsa-mountspec Mar 12 21:11:55.033: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 12 21:11:55.079: INFO: created pod pod-service-account-nomountsa-mountspec Mar 12 21:11:55.079: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 12 21:11:55.105: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 12 21:11:55.105: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 12 21:11:55.121: INFO: created pod pod-service-account-mountsa-nomountspec Mar 12 21:11:55.121: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 12 21:11:55.134: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 12 21:11:55.134: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:11:55.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2678" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":17,"skipped":233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:11:55.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 12 21:11:55.857: INFO: Waiting up to 5m0s for pod "client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2" in namespace "containers-4102" to be "success or failure" Mar 12 21:11:55.887: INFO: Pod "client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.896423ms Mar 12 21:11:57.909: INFO: Pod "client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052174951s Mar 12 21:11:59.912: INFO: Pod "client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055543988s STEP: Saw pod success Mar 12 21:11:59.912: INFO: Pod "client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2" satisfied condition "success or failure" Mar 12 21:11:59.914: INFO: Trying to get logs from node jerma-worker2 pod client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2 container test-container: STEP: delete the pod Mar 12 21:11:59.942: INFO: Waiting for pod client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2 to disappear Mar 12 21:11:59.948: INFO: Pod client-containers-2bed5439-cf61-43e6-841f-77b5f16048a2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:11:59.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4102" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:11:59.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 12 21:12:00.816: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 12 21:12:02.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644320, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644320, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644320, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644320, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:12:05.899: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:12:05.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:12:07.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-330" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.353 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":19,"skipped":289,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:12:07.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9258 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 21:12:07.426: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 21:12:25.535: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.108 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9258 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:12:25.535: INFO: >>> kubeConfig: /root/.kube/config I0312 21:12:25.561873 6 log.go:172] (0xc001af8fd0) (0xc001439360) Create stream I0312 21:12:25.561899 6 log.go:172] (0xc001af8fd0) (0xc001439360) Stream added, broadcasting: 1 I0312 21:12:25.563728 6 log.go:172] (0xc001af8fd0) Reply frame received for 1 I0312 21:12:25.563756 6 log.go:172] (0xc001af8fd0) (0xc0014415e0) Create stream I0312 21:12:25.563765 6 log.go:172] (0xc001af8fd0) (0xc0014415e0) Stream added, broadcasting: 3 I0312 21:12:25.564548 6 log.go:172] (0xc001af8fd0) Reply frame received for 3 I0312 21:12:25.564587 6 log.go:172] (0xc001af8fd0) (0xc001441680) Create stream I0312 21:12:25.564601 6 log.go:172] (0xc001af8fd0) (0xc001441680) Stream added, broadcasting: 5 I0312 21:12:25.565304 6 log.go:172] (0xc001af8fd0) Reply frame received for 5 I0312 21:12:26.626575 6 log.go:172] (0xc001af8fd0) Data frame received for 3 I0312 21:12:26.626609 6 log.go:172] (0xc0014415e0) (3) Data frame handling I0312 21:12:26.626628 6 log.go:172] (0xc0014415e0) (3) Data frame sent I0312 21:12:26.626647 6 log.go:172] (0xc001af8fd0) Data frame received for 3 I0312 21:12:26.626661 6 log.go:172] (0xc0014415e0) (3) Data frame handling I0312 21:12:26.626681 6 log.go:172] (0xc001af8fd0) Data frame received for 5 I0312 21:12:26.626696 6 log.go:172] (0xc001441680) (5) 
Data frame handling I0312 21:12:26.628804 6 log.go:172] (0xc001af8fd0) Data frame received for 1 I0312 21:12:26.628831 6 log.go:172] (0xc001439360) (1) Data frame handling I0312 21:12:26.628857 6 log.go:172] (0xc001439360) (1) Data frame sent I0312 21:12:26.628881 6 log.go:172] (0xc001af8fd0) (0xc001439360) Stream removed, broadcasting: 1 I0312 21:12:26.629140 6 log.go:172] (0xc001af8fd0) Go away received I0312 21:12:26.629347 6 log.go:172] (0xc001af8fd0) (0xc001439360) Stream removed, broadcasting: 1 I0312 21:12:26.629382 6 log.go:172] (0xc001af8fd0) (0xc0014415e0) Stream removed, broadcasting: 3 I0312 21:12:26.629403 6 log.go:172] (0xc001af8fd0) (0xc001441680) Stream removed, broadcasting: 5 Mar 12 21:12:26.629: INFO: Found all expected endpoints: [netserver-0] Mar 12 21:12:26.633: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.99 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9258 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:12:26.633: INFO: >>> kubeConfig: /root/.kube/config I0312 21:12:26.656762 6 log.go:172] (0xc0023200b0) (0xc001374780) Create stream I0312 21:12:26.656779 6 log.go:172] (0xc0023200b0) (0xc001374780) Stream added, broadcasting: 1 I0312 21:12:26.659559 6 log.go:172] (0xc0023200b0) Reply frame received for 1 I0312 21:12:26.659595 6 log.go:172] (0xc0023200b0) (0xc001cc4e60) Create stream I0312 21:12:26.659607 6 log.go:172] (0xc0023200b0) (0xc001cc4e60) Stream added, broadcasting: 3 I0312 21:12:26.660424 6 log.go:172] (0xc0023200b0) Reply frame received for 3 I0312 21:12:26.660442 6 log.go:172] (0xc0023200b0) (0xc001374820) Create stream I0312 21:12:26.660448 6 log.go:172] (0xc0023200b0) (0xc001374820) Stream added, broadcasting: 5 I0312 21:12:26.661127 6 log.go:172] (0xc0023200b0) Reply frame received for 5 I0312 21:12:27.714058 6 log.go:172] (0xc0023200b0) Data frame received for 3 I0312 21:12:27.714095 6 log.go:172] (0xc001cc4e60) (3) Data frame handling I0312 21:12:27.714158 6 log.go:172] (0xc001cc4e60) (3) Data frame sent I0312 21:12:27.714598 6 log.go:172] (0xc0023200b0) Data frame received for 3 I0312 21:12:27.714635 6 log.go:172] (0xc001cc4e60) (3) Data frame handling I0312 21:12:27.714671 6 log.go:172] (0xc0023200b0) Data frame received for 5 I0312 21:12:27.714707 6 log.go:172] (0xc001374820) (5) Data frame handling I0312 21:12:27.716455 6 log.go:172] (0xc0023200b0) Data frame received for 1 I0312 21:12:27.716483 6 log.go:172] (0xc001374780) (1) Data frame handling I0312 21:12:27.716524 6 log.go:172] (0xc001374780) (1) Data frame sent I0312 21:12:27.716541 6 log.go:172] (0xc0023200b0) (0xc001374780) Stream removed, broadcasting: 1 I0312 21:12:27.716564 6 log.go:172] (0xc0023200b0) Go away received I0312 21:12:27.716763 6 log.go:172] (0xc0023200b0) (0xc001374780) Stream removed, broadcasting: 1 I0312 21:12:27.716792 6 log.go:172] (0xc0023200b0) (0xc001cc4e60) Stream removed, broadcasting: 3 I0312 21:12:27.716811 6 log.go:172] (0xc0023200b0) (0xc001374820) Stream removed, broadcasting: 5 Mar 12 21:12:27.716: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:12:27.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9258" for this suite. 
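Each ExecWithOptions above is the framework shelling into the host-network test pod and probing a netserver pod directly over UDP: agnhost's netexec listens on 8081 and answers the string 'hostName' with its own hostname, which is how the test maps replies back to the expected endpoints [netserver-0] and [netserver-1]. The same probe can be run by hand, reusing the pod IP 10.244.2.108 from the log (pod IPs are ephemeral, so substitute current ones):

  kubectl --kubeconfig=/root/.kube/config exec -n pod-network-test-9258 \
    host-test-container-pod -c agnhost -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.108 8081 | grep -v '^\s*$'"

The trailing grep drops blank lines so that an empty reply counts as a failure.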
• [SLOW TEST:20.415 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":303,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:12:27.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 21:12:27.790: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:12:31.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5899" for this suite. 
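The init-container test above only logs a one-line PodSpec summary, so the shape of the pod is easy to miss: on a restartPolicy: Always pod, every init container must run to completion, in order, before the app container starts. A minimal sketch of such a pod follows; the names and images are illustrative, not the test's actual spec:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # illustrative name, not the test's pod
spec:
  restartPolicy: Always
  initContainers:              # run sequentially, each must exit 0 first
  - name: init-1
    image: busybox:1.29
    command: ["/bin/true"]
  - name: init-2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: main                 # starts only after both inits complete
    image: k8s.gcr.io/pause:3.1
EOF
```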
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":21,"skipped":319,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:12:31.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 12 21:12:34.355: INFO: Successfully updated pod "pod-update-f24da5a3-fab7-481d-bd1a-ffc146dcbb60" STEP: verifying the updated pod is in kubernetes Mar 12 21:12:34.365: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:12:34.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7470" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":330,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:12:34.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0312 21:12:40.490650 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 12 21:12:40.490: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:12:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3551" for this suite. • [SLOW TEST:6.122 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":23,"skipped":338,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:12:40.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-4639 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4639 STEP: Deleting pre-stop pod Mar 12 21:12:51.663: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:12:51.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4639" for this suite. 
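The "prestop": 1 counter in the JSON above is the server pod recording one callback made by the tester pod's preStop hook. In general, a preStop hook runs when a pod is deleted, before the container receives SIGTERM. A rough sketch of such a hook, with a hypothetical callback URL standing in for the test's server endpoint:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo           # illustrative; the test's pod is named "tester"
spec:
  containers:
  - name: tester
    image: busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      preStop:                 # runs on deletion, before SIGTERM is sent
        exec:
          # hypothetical endpoint; the conformance test reports to its own
          # server pod instead
          command: ["wget", "-q", "-O", "-", "http://server:8080/write"]
EOF
# Deleting the pod now triggers the hook before the container is killed:
#   kubectl delete pod prestop-demo
```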
• [SLOW TEST:11.190 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":24,"skipped":343,"failed":0} [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:12:51.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 12 21:12:51.741: INFO: Waiting up to 5m0s for pod "var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4" in namespace "var-expansion-6238" to be "success or failure" Mar 12 21:12:51.792: INFO: Pod "var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 50.133503ms Mar 12 21:12:53.796: INFO: Pod "var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054498076s Mar 12 21:12:55.800: INFO: Pod "var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058201637s STEP: Saw pod success Mar 12 21:12:55.800: INFO: Pod "var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4" satisfied condition "success or failure" Mar 12 21:12:55.803: INFO: Trying to get logs from node jerma-worker pod var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4 container dapi-container: STEP: delete the pod Mar 12 21:12:55.825: INFO: Waiting for pod var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4 to disappear Mar 12 21:12:55.829: INFO: Pod var-expansion-72e2bbf9-4894-434f-bce8-2d3f896d8fd4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:12:55.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6238" for this suite. 
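The var-expansion pod above composes one environment variable out of another: Kubernetes expands $(VAR) references in an env value against variables defined earlier in the same container. A minimal sketch, with illustrative pod and variable names:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: prefix-$(FOO)-suffix   # expands to prefix-foo-value-suffix
EOF
```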
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":343,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:12:55.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 21:12:59.957: INFO: DNS probes using dns-test-06ad07c3-3aed-435b-9dbe-06145a06a55f succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 21:13:04.029: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:04.032: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:04.033: INFO: Lookups using dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] Mar 12 21:13:09.037: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:09.041: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 12 21:13:09.041: INFO: Lookups using dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] Mar 12 21:13:14.038: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:14.041: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:14.041: INFO: Lookups using dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] Mar 12 21:13:19.036: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:19.037: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:19.037: INFO: Lookups using dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] Mar 12 21:13:24.037: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:24.040: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 12 21:13:24.040: INFO: Lookups using dns-6767/dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] Mar 12 21:13:29.040: INFO: DNS probes using dns-test-b1ce92bb-3fbe-4988-8ec6-9fa8d6ff849e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 21:13:33.203: INFO: DNS probes using dns-test-285a7b33-ced2-4d1d-970f-1c2c3c8e7496 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:13:33.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6767" for this suite. 
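This test creates an ExternalName service, flips spec.externalName from foo.example.com to bar.example.com, and finally converts the service to type=ClusterIP, re-probing the record each time with dig. The failed lookups logged between 21:13:04 and 21:13:24 are the probe loop waiting for cluster DNS to pick up the changed externalName, not a test error. The moving parts, sketched with kubectl using the service name and namespace from this run:

```sh
# Create the ExternalName service the probes resolve (CNAME -> foo.example.com).
kubectl create service externalname dns-test-service-3 \
  -n dns-6767 --external-name foo.example.com

# Repoint the CNAME. DNS caching means probes may briefly see the old
# target, which is exactly the retry loop visible in the log above.
kubectl patch service dns-test-service-3 -n dns-6767 \
  -p '{"spec":{"externalName":"bar.example.com"}}'

# From inside any pod with dig available (as in the test's probe images):
#   dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME
```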
• [SLOW TEST:37.468 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":26,"skipped":351,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:13:33.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2830.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2830.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2830.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 21:13:37.455: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.457: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.459: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.461: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.468: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.470: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.472: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.474: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:37.478: INFO: Lookups using dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local] Mar 12 21:13:42.482: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource 
(get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.484: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.487: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.489: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.495: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.498: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.502: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.504: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:42.510: INFO: Lookups using dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local] Mar 12 21:13:47.482: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.485: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.488: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.491: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local from 
pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.498: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.501: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.503: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.506: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:47.512: INFO: Lookups using dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local] Mar 12 21:13:52.492: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.495: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.498: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.500: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.507: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.509: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods 
dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.511: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.513: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:52.518: INFO: Lookups using dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local] Mar 12 21:13:57.498: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.501: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.504: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.507: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.514: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.517: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.519: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.522: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:13:57.527: INFO: Lookups using dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local] Mar 12 21:14:02.482: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.486: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.489: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.491: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.498: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.500: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.503: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.505: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local from pod dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722: the server could not find the requested resource (get pods dns-test-19666499-c347-48c4-8e5b-2366ba48b722) Mar 12 21:14:02.540: INFO: Lookups using dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2830.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2830.svc.cluster.local jessie_udp@dns-test-service-2.dns-2830.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2830.svc.cluster.local] Mar 12 21:14:07.514: INFO: DNS probes using dns-2830/dns-test-19666499-c347-48c4-8e5b-2366ba48b722 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:07.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2830" for this suite. • [SLOW TEST:34.379 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":27,"skipped":357,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:07.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-380.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-380.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-380.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-380.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-380.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-380.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 21:14:11.768: INFO: DNS probes using dns-380/dns-test-f735aa59-699a-47cc-b24c-3b495660b3fb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:11.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-380" for this suite. 
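Both DNS tests above share one probe pattern: a probe pod loops over dig/getent lookups and writes an OK marker file for each name that resolves, and the framework then polls those files back through the pods API. The repeated "Unable to read ... the server could not find the requested resource" lines are, in effect, polls that ran before the marker files existed, not lookup failures; the probes eventually succeed. While the probe pods are still running (they are deleted at teardown, so this replay is hypothetical after the fact), the underlying checks could be run directly; names come from this run, and a -c <container> flag may be needed to pick a probe container that has the tools:

```sh
# Hostname entry injected into /etc/hosts by the kubelet (the /etc/hosts test):
kubectl exec -n dns-380 dns-test-f735aa59-699a-47cc-b24c-3b495660b3fb -- \
  getent hosts dns-querier-1.dns-test-service.dns-380.svc.cluster.local

# A record for a pod behind a headless service (the Subdomain test):
kubectl exec -n dns-2830 dns-test-19666499-c347-48c4-8e5b-2366ba48b722 -- \
  dig +short dns-test-service-2.dns-2830.svc.cluster.local A
```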
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":28,"skipped":362,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:11.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 12 21:14:11.992: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 12 21:14:17.006: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:17.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1807" for this suite. • [SLOW TEST:5.210 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":29,"skipped":378,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:17.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:14:17.240: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-aab172cb-cc12-4015-b0ff-b41bce8f9b7d" in namespace "security-context-test-2013" to be "success or failure" Mar 12 21:14:17.258: INFO: Pod "busybox-privileged-false-aab172cb-cc12-4015-b0ff-b41bce8f9b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.950146ms Mar 12 21:14:19.261: INFO: Pod "busybox-privileged-false-aab172cb-cc12-4015-b0ff-b41bce8f9b7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021180599s Mar 12 21:14:19.261: INFO: Pod "busybox-privileged-false-aab172cb-cc12-4015-b0ff-b41bce8f9b7d" satisfied condition "success or failure" Mar 12 21:14:19.275: INFO: Got logs for pod "busybox-privileged-false-aab172cb-cc12-4015-b0ff-b41bce8f9b7d": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:19.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2013" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":386,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:19.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 12 21:14:19.337: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:33.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7863" for this suite.
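The OpenAPI-publishing test builds a CRD with two versions, marks one served: false (the "mark a version not served" step above), and then checks that only that version's definitions disappear from the aggregated /openapi/v2 document. A sketch of the relevant knob, with a hypothetical group and kind:

```sh
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # hypothetical group/kind throughout
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true                 # published under /openapi/v2
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                # accepted by the API server but not served,
    storage: false               # so its definition drops out of the spec
    schema:
      openAPIV3Schema:
        type: object
EOF
```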
• [SLOW TEST:14.594 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":31,"skipped":389,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:33.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b402974c-0dec-43dd-9f73-58a225a90071 STEP: Creating a pod to test consume configMaps Mar 12 21:14:33.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74" in namespace "projected-2209" to be "success or failure" Mar 12 21:14:33.933: INFO: Pod "pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368166ms Mar 12 21:14:35.937: INFO: Pod "pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008174993s Mar 12 21:14:37.941: INFO: Pod "pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012099174s STEP: Saw pod success Mar 12 21:14:37.941: INFO: Pod "pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74" satisfied condition "success or failure" Mar 12 21:14:37.944: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74 container projected-configmap-volume-test: STEP: delete the pod Mar 12 21:14:38.019: INFO: Waiting for pod pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74 to disappear Mar 12 21:14:38.021: INFO: Pod pod-projected-configmaps-54472732-1148-49a1-8379-5d7017a90f74 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:38.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2209" for this suite. 
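The projected-configMap pod above consumes the configMap through a projected volume, the volume type that can merge configMaps, secrets, and downward-API items into a single mount. Sketched with illustrative names:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-demo-cm        # illustrative names throughout
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/data-1"]   # each key becomes a file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/projected
  volumes:
  - name: podinfo
    projected:                   # projected volume with a configMap source
      sources:
      - configMap:
          name: projected-demo-cm
EOF
```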
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":402,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:38.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 12 21:14:38.101: INFO: Waiting up to 5m0s for pod "pod-54d011b7-1f9d-4e72-83eb-7af539d36349" in namespace "emptydir-5962" to be "success or failure" Mar 12 21:14:38.114: INFO: Pod "pod-54d011b7-1f9d-4e72-83eb-7af539d36349": Phase="Pending", Reason="", readiness=false. Elapsed: 13.011694ms Mar 12 21:14:40.117: INFO: Pod "pod-54d011b7-1f9d-4e72-83eb-7af539d36349": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016402216s STEP: Saw pod success Mar 12 21:14:40.117: INFO: Pod "pod-54d011b7-1f9d-4e72-83eb-7af539d36349" satisfied condition "success or failure" Mar 12 21:14:40.120: INFO: Trying to get logs from node jerma-worker pod pod-54d011b7-1f9d-4e72-83eb-7af539d36349 container test-container: STEP: delete the pod Mar 12 21:14:40.152: INFO: Waiting for pod pod-54d011b7-1f9d-4e72-83eb-7af539d36349 to disappear Mar 12 21:14:40.157: INFO: Pod pod-54d011b7-1f9d-4e72-83eb-7af539d36349 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:40.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5962" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:40.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:40.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1459" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":34,"skipped":434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:40.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 21:14:40.394: INFO: Waiting up to 5m0s for pod "downward-api-a05b3fb6-14b3-4d20-ab18-320ca32f9972" in namespace "downward-api-7394" to be "success or failure" Mar 12 21:14:40.418: INFO: Pod "downward-api-a05b3fb6-14b3-4d20-ab18-320ca32f9972": Phase="Pending", Reason="", readiness=false. Elapsed: 24.41653ms Mar 12 21:14:42.422: INFO: Pod "downward-api-a05b3fb6-14b3-4d20-ab18-320ca32f9972": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.027753783s STEP: Saw pod success Mar 12 21:14:42.422: INFO: Pod "downward-api-a05b3fb6-14b3-4d20-ab18-320ca32f9972" satisfied condition "success or failure" Mar 12 21:14:42.424: INFO: Trying to get logs from node jerma-worker pod downward-api-a05b3fb6-14b3-4d20-ab18-320ca32f9972 container dapi-container: STEP: delete the pod Mar 12 21:14:42.456: INFO: Waiting for pod downward-api-a05b3fb6-14b3-4d20-ab18-320ca32f9972 to disappear Mar 12 21:14:42.462: INFO: Pod downward-api-a05b3fb6-14b3-4d20-ab18-320ca32f9972 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:42.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7394" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:42.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9317 STEP: creating replication controller nodeport-test in namespace services-9317 I0312 21:14:42.600951 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9317, replica count: 2 I0312 21:14:45.651307 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 21:14:45.651: INFO: Creating new exec pod Mar 12 21:14:48.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9317 execpodcg49k -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 12 21:14:48.943: INFO: stderr: "I0312 21:14:48.871674 382 log.go:172] (0xc000a8b340) (0xc000ae0640) Create stream\nI0312 21:14:48.871736 382 log.go:172] (0xc000a8b340) (0xc000ae0640) Stream added, broadcasting: 1\nI0312 21:14:48.876007 382 log.go:172] (0xc000a8b340) Reply frame received for 1\nI0312 21:14:48.876049 382 log.go:172] (0xc000a8b340) (0xc000669cc0) Create stream\nI0312 21:14:48.876059 382 log.go:172] (0xc000a8b340) (0xc000669cc0) Stream added, broadcasting: 3\nI0312 21:14:48.876916 382 log.go:172] (0xc000a8b340) Reply frame received for 3\nI0312 21:14:48.876940 382 log.go:172] (0xc000a8b340) (0xc0005c08c0) Create stream\nI0312 21:14:48.876950 382 log.go:172] (0xc000a8b340) (0xc0005c08c0) Stream added, broadcasting: 5\nI0312 21:14:48.877773 382 log.go:172] (0xc000a8b340) Reply frame received for 5\nI0312 21:14:48.936948 382 log.go:172] (0xc000a8b340) Data frame 
received for 5\nI0312 21:14:48.936976 382 log.go:172] (0xc0005c08c0) (5) Data frame handling\nI0312 21:14:48.936995 382 log.go:172] (0xc0005c08c0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0312 21:14:48.937800 382 log.go:172] (0xc000a8b340) Data frame received for 5\nI0312 21:14:48.937820 382 log.go:172] (0xc0005c08c0) (5) Data frame handling\nI0312 21:14:48.937836 382 log.go:172] (0xc0005c08c0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0312 21:14:48.938944 382 log.go:172] (0xc000a8b340) Data frame received for 3\nI0312 21:14:48.938966 382 log.go:172] (0xc000a8b340) Data frame received for 5\nI0312 21:14:48.938986 382 log.go:172] (0xc0005c08c0) (5) Data frame handling\nI0312 21:14:48.939002 382 log.go:172] (0xc000669cc0) (3) Data frame handling\nI0312 21:14:48.939720 382 log.go:172] (0xc000a8b340) Data frame received for 1\nI0312 21:14:48.939736 382 log.go:172] (0xc000ae0640) (1) Data frame handling\nI0312 21:14:48.939744 382 log.go:172] (0xc000ae0640) (1) Data frame sent\nI0312 21:14:48.939868 382 log.go:172] (0xc000a8b340) (0xc000ae0640) Stream removed, broadcasting: 1\nI0312 21:14:48.940040 382 log.go:172] (0xc000a8b340) Go away received\nI0312 21:14:48.940170 382 log.go:172] (0xc000a8b340) (0xc000ae0640) Stream removed, broadcasting: 1\nI0312 21:14:48.940185 382 log.go:172] (0xc000a8b340) (0xc000669cc0) Stream removed, broadcasting: 3\nI0312 21:14:48.940193 382 log.go:172] (0xc000a8b340) (0xc0005c08c0) Stream removed, broadcasting: 5\n" Mar 12 21:14:48.943: INFO: stdout: "" Mar 12 21:14:48.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9317 execpodcg49k -- /bin/sh -x -c nc -zv -t -w 2 10.99.248.222 80' Mar 12 21:14:49.124: INFO: stderr: "I0312 21:14:49.052359 402 log.go:172] (0xc000ac18c0) (0xc000a106e0) Create stream\nI0312 21:14:49.052403 402 log.go:172] (0xc000ac18c0) (0xc000a106e0) Stream added, broadcasting: 1\nI0312 21:14:49.055468 402 log.go:172] (0xc000ac18c0) Reply frame received for 1\nI0312 21:14:49.055496 402 log.go:172] (0xc000ac18c0) (0xc00067c500) Create stream\nI0312 21:14:49.055504 402 log.go:172] (0xc000ac18c0) (0xc00067c500) Stream added, broadcasting: 3\nI0312 21:14:49.056083 402 log.go:172] (0xc000ac18c0) Reply frame received for 3\nI0312 21:14:49.056106 402 log.go:172] (0xc000ac18c0) (0xc0004272c0) Create stream\nI0312 21:14:49.056113 402 log.go:172] (0xc000ac18c0) (0xc0004272c0) Stream added, broadcasting: 5\nI0312 21:14:49.056783 402 log.go:172] (0xc000ac18c0) Reply frame received for 5\nI0312 21:14:49.120254 402 log.go:172] (0xc000ac18c0) Data frame received for 5\nI0312 21:14:49.120291 402 log.go:172] (0xc0004272c0) (5) Data frame handling\nI0312 21:14:49.120305 402 log.go:172] (0xc0004272c0) (5) Data frame sent\nI0312 21:14:49.120314 402 log.go:172] (0xc000ac18c0) Data frame received for 5\nI0312 21:14:49.120322 402 log.go:172] (0xc0004272c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.248.222 80\nConnection to 10.99.248.222 80 port [tcp/http] succeeded!\nI0312 21:14:49.120345 402 log.go:172] (0xc000ac18c0) Data frame received for 3\nI0312 21:14:49.120361 402 log.go:172] (0xc00067c500) (3) Data frame handling\nI0312 21:14:49.121497 402 log.go:172] (0xc000ac18c0) Data frame received for 1\nI0312 21:14:49.121513 402 log.go:172] (0xc000a106e0) (1) Data frame handling\nI0312 21:14:49.121519 402 log.go:172] (0xc000a106e0) (1) Data frame sent\nI0312 21:14:49.121530 402 log.go:172] (0xc000ac18c0) (0xc000a106e0) Stream removed, broadcasting: 1\nI0312 
21:14:49.121563 402 log.go:172] (0xc000ac18c0) Go away received\nI0312 21:14:49.121874 402 log.go:172] (0xc000ac18c0) (0xc000a106e0) Stream removed, broadcasting: 1\nI0312 21:14:49.121887 402 log.go:172] (0xc000ac18c0) (0xc00067c500) Stream removed, broadcasting: 3\nI0312 21:14:49.121894 402 log.go:172] (0xc000ac18c0) (0xc0004272c0) Stream removed, broadcasting: 5\n" Mar 12 21:14:49.124: INFO: stdout: "" Mar 12 21:14:49.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9317 execpodcg49k -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 31729' Mar 12 21:14:49.315: INFO: stderr: "I0312 21:14:49.228143 422 log.go:172] (0xc000542d10) (0xc0009d6000) Create stream\nI0312 21:14:49.228189 422 log.go:172] (0xc000542d10) (0xc0009d6000) Stream added, broadcasting: 1\nI0312 21:14:49.229809 422 log.go:172] (0xc000542d10) Reply frame received for 1\nI0312 21:14:49.229829 422 log.go:172] (0xc000542d10) (0xc0009d60a0) Create stream\nI0312 21:14:49.229835 422 log.go:172] (0xc000542d10) (0xc0009d60a0) Stream added, broadcasting: 3\nI0312 21:14:49.230422 422 log.go:172] (0xc000542d10) Reply frame received for 3\nI0312 21:14:49.230448 422 log.go:172] (0xc000542d10) (0xc00067fae0) Create stream\nI0312 21:14:49.230456 422 log.go:172] (0xc000542d10) (0xc00067fae0) Stream added, broadcasting: 5\nI0312 21:14:49.231203 422 log.go:172] (0xc000542d10) Reply frame received for 5\nI0312 21:14:49.311510 422 log.go:172] (0xc000542d10) Data frame received for 3\nI0312 21:14:49.311545 422 log.go:172] (0xc0009d60a0) (3) Data frame handling\nI0312 21:14:49.311768 422 log.go:172] (0xc000542d10) Data frame received for 5\nI0312 21:14:49.311783 422 log.go:172] (0xc00067fae0) (5) Data frame handling\nI0312 21:14:49.311800 422 log.go:172] (0xc00067fae0) (5) Data frame sent\nI0312 21:14:49.311810 422 log.go:172] (0xc000542d10) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.4 31729\nConnection to 172.17.0.4 31729 port [tcp/31729] succeeded!\nI0312 21:14:49.311817 422 log.go:172] (0xc00067fae0) (5) Data frame handling\nI0312 21:14:49.312819 422 log.go:172] (0xc000542d10) Data frame received for 1\nI0312 21:14:49.312830 422 log.go:172] (0xc0009d6000) (1) Data frame handling\nI0312 21:14:49.312836 422 log.go:172] (0xc0009d6000) (1) Data frame sent\nI0312 21:14:49.312846 422 log.go:172] (0xc000542d10) (0xc0009d6000) Stream removed, broadcasting: 1\nI0312 21:14:49.312876 422 log.go:172] (0xc000542d10) Go away received\nI0312 21:14:49.313051 422 log.go:172] (0xc000542d10) (0xc0009d6000) Stream removed, broadcasting: 1\nI0312 21:14:49.313059 422 log.go:172] (0xc000542d10) (0xc0009d60a0) Stream removed, broadcasting: 3\nI0312 21:14:49.313064 422 log.go:172] (0xc000542d10) (0xc00067fae0) Stream removed, broadcasting: 5\n" Mar 12 21:14:49.315: INFO: stdout: "" Mar 12 21:14:49.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9317 execpodcg49k -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 31729' Mar 12 21:14:49.484: INFO: stderr: "I0312 21:14:49.410953 441 log.go:172] (0xc000a094a0) (0xc000a36820) Create stream\nI0312 21:14:49.410999 441 log.go:172] (0xc000a094a0) (0xc000a36820) Stream added, broadcasting: 1\nI0312 21:14:49.414535 441 log.go:172] (0xc000a094a0) Reply frame received for 1\nI0312 21:14:49.414566 441 log.go:172] (0xc000a094a0) (0xc0006386e0) Create stream\nI0312 21:14:49.414574 441 log.go:172] (0xc000a094a0) (0xc0006386e0) Stream added, broadcasting: 3\nI0312 21:14:49.415228 441 log.go:172] (0xc000a094a0) Reply frame received for 
3\nI0312 21:14:49.415270 441 log.go:172] (0xc000a094a0) (0xc00070d4a0) Create stream\nI0312 21:14:49.415278 441 log.go:172] (0xc000a094a0) (0xc00070d4a0) Stream added, broadcasting: 5\nI0312 21:14:49.415986 441 log.go:172] (0xc000a094a0) Reply frame received for 5\nI0312 21:14:49.480471 441 log.go:172] (0xc000a094a0) Data frame received for 5\nI0312 21:14:49.480510 441 log.go:172] (0xc00070d4a0) (5) Data frame handling\nI0312 21:14:49.480522 441 log.go:172] (0xc00070d4a0) (5) Data frame sent\nI0312 21:14:49.480531 441 log.go:172] (0xc000a094a0) Data frame received for 5\nI0312 21:14:49.480536 441 log.go:172] (0xc00070d4a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 31729\nConnection to 172.17.0.5 31729 port [tcp/31729] succeeded!\nI0312 21:14:49.480561 441 log.go:172] (0xc000a094a0) Data frame received for 3\nI0312 21:14:49.480569 441 log.go:172] (0xc0006386e0) (3) Data frame handling\nI0312 21:14:49.481576 441 log.go:172] (0xc000a094a0) Data frame received for 1\nI0312 21:14:49.481608 441 log.go:172] (0xc000a36820) (1) Data frame handling\nI0312 21:14:49.481624 441 log.go:172] (0xc000a36820) (1) Data frame sent\nI0312 21:14:49.481637 441 log.go:172] (0xc000a094a0) (0xc000a36820) Stream removed, broadcasting: 1\nI0312 21:14:49.481651 441 log.go:172] (0xc000a094a0) Go away received\nI0312 21:14:49.481916 441 log.go:172] (0xc000a094a0) (0xc000a36820) Stream removed, broadcasting: 1\nI0312 21:14:49.481937 441 log.go:172] (0xc000a094a0) (0xc0006386e0) Stream removed, broadcasting: 3\nI0312 21:14:49.481944 441 log.go:172] (0xc000a094a0) (0xc00070d4a0) Stream removed, broadcasting: 5\n" Mar 12 21:14:49.485: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:49.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9317" for this suite. 
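Note: the NodePort service the nc probes above exercise can be created programmatically. A minimal client-go sketch (not the e2e framework's own code; assumes a recent client-go with context-taking methods, the kubeconfig path used by this run, and an illustrative selector label) that creates the service and reads back the port the API server allocated (31729 in this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"}, // illustrative pod label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	// "services-9317" is the namespace from this run; it must already exist.
	created, err := cs.CoreV1().Services("services-9317").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The node port is allocated by the API server and filled in on the returned object.
	fmt.Println("allocated NodePort:", created.Spec.Ports[0].NodePort)
}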
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.023 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":36,"skipped":505,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:49.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5175/configmap-test-9d18a6a2-499a-4a56-b795-c499b8338219 STEP: Creating a pod to test consume configMaps Mar 12 21:14:49.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0709d77-b576-4e95-a961-c6e5e5cf5599" in namespace "configmap-5175" to be "success or failure" Mar 12 21:14:49.609: INFO: Pod "pod-configmaps-d0709d77-b576-4e95-a961-c6e5e5cf5599": Phase="Pending", Reason="", readiness=false. Elapsed: 19.140185ms Mar 12 21:14:51.613: INFO: Pod "pod-configmaps-d0709d77-b576-4e95-a961-c6e5e5cf5599": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022646661s STEP: Saw pod success Mar 12 21:14:51.613: INFO: Pod "pod-configmaps-d0709d77-b576-4e95-a961-c6e5e5cf5599" satisfied condition "success or failure" Mar 12 21:14:51.615: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d0709d77-b576-4e95-a961-c6e5e5cf5599 container env-test: STEP: delete the pod Mar 12 21:14:51.631: INFO: Waiting for pod pod-configmaps-d0709d77-b576-4e95-a961-c6e5e5cf5599 to disappear Mar 12 21:14:51.636: INFO: Pod pod-configmaps-d0709d77-b576-4e95-a961-c6e5e5cf5599 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:51.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5175" for this suite. 
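Note: the pattern the configMap test above exercises is a ConfigMap key surfaced to the container as an environment variable via ConfigMapKeyRef. A sketch of the pod wiring in k8s.io/api types, with illustrative names (the test generates its own UUID-based names); this only builds and prints the spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							// Hypothetical ConfigMap name and key.
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Env)
}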
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:51.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:14:52.331: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:14:55.385: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:14:55.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2892" for this suite. STEP: Destroying namespace "webhook-2892-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":38,"skipped":526,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:14:55.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 12 21:14:57.568: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 12 21:15:07.689: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:07.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3492" for this suite.
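Note: the "deleting the pod gracefully" step above amounts to a delete call with an explicit grace period; the kubelet then gets that long to stop the container before it is killed. A minimal sketch, assuming a recent client-go; the pod name below is hypothetical (the test generates its own), while the namespace is from this run:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	grace := int64(30) // seconds the kubelet waits before force-killing the container
	err = cs.CoreV1().Pods("pods-3492").Delete(context.TODO(),
		"pod-submit-remove-example", // hypothetical name
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
}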
• [SLOW TEST:12.212 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":39,"skipped":532,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:07.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:15:07.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1" in namespace "projected-1259" to be "success or failure" Mar 12 21:15:07.829: INFO: Pod "downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1": Phase="Pending", Reason="", readiness=false. Elapsed: 41.648722ms Mar 12 21:15:09.833: INFO: Pod "downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1": Phase="Running", Reason="", readiness=true. Elapsed: 2.045702368s Mar 12 21:15:11.836: INFO: Pod "downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048440223s STEP: Saw pod success Mar 12 21:15:11.836: INFO: Pod "downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1" satisfied condition "success or failure" Mar 12 21:15:11.838: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1 container client-container: STEP: delete the pod Mar 12 21:15:11.904: INFO: Waiting for pod downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1 to disappear Mar 12 21:15:11.910: INFO: Pod downwardapi-volume-d7e0c09c-4624-4254-bac9-824dacc75de1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:11.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1259" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":541,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:11.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:28.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3413" for this suite. • [SLOW TEST:16.261 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":41,"skipped":543,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:28.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 21:15:28.285: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 21:15:28.295: INFO: Waiting for terminating namespaces to be deleted... 
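Note: the per-node pod dump that follows ("Logging pods the kubelet thinks are on node ...") can be reproduced with a field selector on spec.nodeName. A sketch assuming a recent client-go; the node name is from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods in all namespaces scheduled to the given node.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=jerma-worker"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
	}
}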
Mar 12 21:15:28.318: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 12 21:15:28.328: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:15:28.328: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 21:15:28.328: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:15:28.328: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 21:15:28.328: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 12 21:15:28.332: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:15:28.332: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 21:15:28.332: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:15:28.332: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fbab042b24a9c2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:29.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2354" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":42,"skipped":562,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:29.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:15:29.418: INFO: Creating ReplicaSet my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1 Mar 12 21:15:29.468: INFO: Pod name my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1: Found 0 pods out of 1 Mar 12 21:15:34.474: INFO: Pod name my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1: Found 1 pod out of 1 Mar 12 21:15:34.474: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1" is running Mar 12 21:15:34.480: INFO: Pod "my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1-99clw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:15:29
+0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:15:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:15:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:15:29 +0000 UTC Reason: Message:}]) Mar 12 21:15:34.480: INFO: Trying to dial the pod Mar 12 21:15:39.491: INFO: Controller my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1: Got expected result from replica 1 [my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1-99clw]: "my-hostname-basic-f6cee4ef-55db-488d-b59d-9d07efcc40b1-99clw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:39.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8387" for this suite. • [SLOW TEST:10.148 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":43,"skipped":565,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:39.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 21:15:42.119: INFO: Successfully updated pod "annotationupdate63027f1b-f228-4b9d-9cd6-b903569f6405" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:44.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3184" for this suite. 
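Note: the annotation-update test above works because downward API volumes are live: the kubelet rewrites the projected file when pod metadata changes. A sketch of the volume wiring in k8s.io/api types, with illustrative names; this only builds and prints the volume:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "annotations", // file exposed inside the mount
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.annotations",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
	// Mount "podinfo" into the container; after annotating the pod (e.g.
	// kubectl annotate pod <name> key=value --overwrite), the kubelet
	// refreshes the projected annotations file.
}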
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:44.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ed86ce43-c329-466d-a21f-cabdbd96d5a1 STEP: Creating a pod to test consume secrets Mar 12 21:15:44.220: INFO: Waiting up to 5m0s for pod "pod-secrets-3d0d2538-03d1-4798-b442-5ab9f62dc1f6" in namespace "secrets-2603" to be "success or failure" Mar 12 21:15:44.225: INFO: Pod "pod-secrets-3d0d2538-03d1-4798-b442-5ab9f62dc1f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.702938ms Mar 12 21:15:46.228: INFO: Pod "pod-secrets-3d0d2538-03d1-4798-b442-5ab9f62dc1f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007970169s STEP: Saw pod success Mar 12 21:15:46.228: INFO: Pod "pod-secrets-3d0d2538-03d1-4798-b442-5ab9f62dc1f6" satisfied condition "success or failure" Mar 12 21:15:46.230: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-3d0d2538-03d1-4798-b442-5ab9f62dc1f6 container secret-volume-test: STEP: delete the pod Mar 12 21:15:46.285: INFO: Waiting for pod pod-secrets-3d0d2538-03d1-4798-b442-5ab9f62dc1f6 to disappear Mar 12 21:15:46.291: INFO: Pod pod-secrets-3d0d2538-03d1-4798-b442-5ab9f62dc1f6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:46.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2603" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:46.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:46.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4166" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":46,"skipped":662,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:46.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-996c64c5-7a3d-43ee-bebf-d4019937d259 STEP: Creating a pod to test consume secrets Mar 12 21:15:46.407: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-205ce402-8b90-410f-8ae4-b70c60971019" in namespace "projected-3074" to be "success or failure" Mar 12 21:15:46.411: INFO: Pod "pod-projected-secrets-205ce402-8b90-410f-8ae4-b70c60971019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.516855ms Mar 12 21:15:48.415: INFO: Pod "pod-projected-secrets-205ce402-8b90-410f-8ae4-b70c60971019": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00811827s STEP: Saw pod success Mar 12 21:15:48.415: INFO: Pod "pod-projected-secrets-205ce402-8b90-410f-8ae4-b70c60971019" satisfied condition "success or failure" Mar 12 21:15:48.418: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-205ce402-8b90-410f-8ae4-b70c60971019 container projected-secret-volume-test: STEP: delete the pod Mar 12 21:15:48.453: INFO: Waiting for pod pod-projected-secrets-205ce402-8b90-410f-8ae4-b70c60971019 to disappear Mar 12 21:15:48.612: INFO: Pod pod-projected-secrets-205ce402-8b90-410f-8ae4-b70c60971019 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:48.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3074" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:48.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:15:48.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd02e05a-3412-4377-ba8d-7c455444cb68" in namespace "projected-6609" to be "success or failure" Mar 12 21:15:48.951: INFO: Pod "downwardapi-volume-bd02e05a-3412-4377-ba8d-7c455444cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110235ms Mar 12 21:15:50.955: INFO: Pod "downwardapi-volume-bd02e05a-3412-4377-ba8d-7c455444cb68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014157268s STEP: Saw pod success Mar 12 21:15:50.955: INFO: Pod "downwardapi-volume-bd02e05a-3412-4377-ba8d-7c455444cb68" satisfied condition "success or failure" Mar 12 21:15:50.958: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-bd02e05a-3412-4377-ba8d-7c455444cb68 container client-container: STEP: delete the pod Mar 12 21:15:50.979: INFO: Waiting for pod downwardapi-volume-bd02e05a-3412-4377-ba8d-7c455444cb68 to disappear Mar 12 21:15:50.984: INFO: Pod downwardapi-volume-bd02e05a-3412-4377-ba8d-7c455444cb68 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:50.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6609" for this suite. 
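Note: for projected volumes, DefaultMode sits on the projected volume source and covers all of its sources; the cpu-request test earlier uses the same resourceFieldRef machinery. A sketch with illustrative names; this only builds and prints the volume:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0440) // illustrative mode for all projected files
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // container whose request is exposed
								Resource:      "requests.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}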
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":752,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:50.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-3a0f7761-20ba-4f6d-8cdd-5eb41ed3548e [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:15:51.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3209" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":49,"skipped":762,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:15:51.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:15:51.128: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3388 I0312 21:15:51.142350 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3388, replica count: 1 I0312 21:15:52.192655 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 21:15:53.192820 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 21:15:54.192988 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 21:15:54.316: INFO: Created: latency-svc-cktx8 Mar 12 21:15:54.344: INFO: Got endpoints: latency-svc-cktx8 [51.240476ms] Mar 12 21:15:54.376: INFO: Created: latency-svc-ntxzn Mar 12 21:15:54.390: INFO: Got endpoints: latency-svc-ntxzn [45.862755ms] Mar 12 21:15:54.406: INFO: Created: latency-svc-sg6t4 Mar 12 21:15:54.412: INFO: Got endpoints: latency-svc-sg6t4 [67.951479ms] Mar 12 21:15:54.431: INFO: Created: latency-svc-txxst Mar 12 21:15:54.437: INFO: Got endpoints: latency-svc-txxst [92.747804ms] Mar 12 21:15:54.493: INFO: Created: 
latency-svc-jwzhr Mar 12 21:15:54.497: INFO: Got endpoints: latency-svc-jwzhr [152.90644ms] Mar 12 21:15:54.527: INFO: Created: latency-svc-9z6wx Mar 12 21:15:54.530: INFO: Got endpoints: latency-svc-9z6wx [185.898261ms] Mar 12 21:15:54.567: INFO: Created: latency-svc-xnxwx Mar 12 21:15:54.573: INFO: Got endpoints: latency-svc-xnxwx [228.477727ms] Mar 12 21:15:54.592: INFO: Created: latency-svc-h928k Mar 12 21:15:54.619: INFO: Got endpoints: latency-svc-h928k [274.293706ms] Mar 12 21:15:54.640: INFO: Created: latency-svc-n2547 Mar 12 21:15:54.648: INFO: Got endpoints: latency-svc-n2547 [303.893956ms] Mar 12 21:15:54.671: INFO: Created: latency-svc-k78jw Mar 12 21:15:54.674: INFO: Got endpoints: latency-svc-k78jw [329.279509ms] Mar 12 21:15:54.702: INFO: Created: latency-svc-d9qqx Mar 12 21:15:54.706: INFO: Got endpoints: latency-svc-d9qqx [362.037816ms] Mar 12 21:15:54.750: INFO: Created: latency-svc-scrk7 Mar 12 21:15:54.763: INFO: Got endpoints: latency-svc-scrk7 [418.528668ms] Mar 12 21:15:54.790: INFO: Created: latency-svc-p4sdt Mar 12 21:15:54.799: INFO: Got endpoints: latency-svc-p4sdt [454.480379ms] Mar 12 21:15:54.844: INFO: Created: latency-svc-z8c89 Mar 12 21:15:54.881: INFO: Got endpoints: latency-svc-z8c89 [537.27285ms] Mar 12 21:15:54.933: INFO: Created: latency-svc-s5h48 Mar 12 21:15:54.964: INFO: Got endpoints: latency-svc-s5h48 [619.795411ms] Mar 12 21:15:55.032: INFO: Created: latency-svc-vnrp8 Mar 12 21:15:55.035: INFO: Got endpoints: latency-svc-vnrp8 [690.898127ms] Mar 12 21:15:55.072: INFO: Created: latency-svc-hfttl Mar 12 21:15:55.079: INFO: Got endpoints: latency-svc-hfttl [689.026227ms] Mar 12 21:15:55.114: INFO: Created: latency-svc-rtslm Mar 12 21:15:55.117: INFO: Got endpoints: latency-svc-rtslm [704.590411ms] Mar 12 21:15:55.169: INFO: Created: latency-svc-n72l2 Mar 12 21:15:55.174: INFO: Got endpoints: latency-svc-n72l2 [737.312378ms] Mar 12 21:15:55.212: INFO: Created: latency-svc-4gghl Mar 12 21:15:55.219: INFO: Got endpoints: latency-svc-4gghl [721.798684ms] Mar 12 21:15:55.246: INFO: Created: latency-svc-kfggf Mar 12 21:15:55.257: INFO: Got endpoints: latency-svc-kfggf [726.614409ms] Mar 12 21:15:55.300: INFO: Created: latency-svc-rjvl5 Mar 12 21:15:55.304: INFO: Got endpoints: latency-svc-rjvl5 [731.762637ms] Mar 12 21:15:55.331: INFO: Created: latency-svc-clmc5 Mar 12 21:15:55.335: INFO: Got endpoints: latency-svc-clmc5 [716.291811ms] Mar 12 21:15:55.355: INFO: Created: latency-svc-q87qc Mar 12 21:15:55.365: INFO: Got endpoints: latency-svc-q87qc [717.370113ms] Mar 12 21:15:55.386: INFO: Created: latency-svc-zfkd6 Mar 12 21:15:55.432: INFO: Got endpoints: latency-svc-zfkd6 [758.909313ms] Mar 12 21:15:55.456: INFO: Created: latency-svc-kpqzd Mar 12 21:15:55.480: INFO: Got endpoints: latency-svc-kpqzd [774.329228ms] Mar 12 21:15:55.511: INFO: Created: latency-svc-hd2z2 Mar 12 21:15:55.520: INFO: Got endpoints: latency-svc-hd2z2 [757.546016ms] Mar 12 21:15:55.564: INFO: Created: latency-svc-bmg48 Mar 12 21:15:55.567: INFO: Got endpoints: latency-svc-bmg48 [768.046985ms] Mar 12 21:15:55.589: INFO: Created: latency-svc-gxd88 Mar 12 21:15:55.594: INFO: Got endpoints: latency-svc-gxd88 [712.857369ms] Mar 12 21:15:55.612: INFO: Created: latency-svc-dth6m Mar 12 21:15:55.618: INFO: Got endpoints: latency-svc-dth6m [654.293057ms] Mar 12 21:15:55.649: INFO: Created: latency-svc-kq24t Mar 12 21:15:55.655: INFO: Got endpoints: latency-svc-kq24t [620.571594ms] Mar 12 21:15:55.708: INFO: Created: latency-svc-wmbxz Mar 12 21:15:55.711: INFO: Got endpoints: 
latency-svc-wmbxz [631.994973ms] Mar 12 21:15:55.744: INFO: Created: latency-svc-7448f Mar 12 21:15:55.752: INFO: Got endpoints: latency-svc-7448f [635.510774ms] Mar 12 21:15:55.768: INFO: Created: latency-svc-d6h2p Mar 12 21:15:55.776: INFO: Got endpoints: latency-svc-d6h2p [601.729814ms] Mar 12 21:15:55.800: INFO: Created: latency-svc-bs225 Mar 12 21:15:55.806: INFO: Got endpoints: latency-svc-bs225 [587.085655ms] Mar 12 21:15:55.853: INFO: Created: latency-svc-4ckt7 Mar 12 21:15:55.858: INFO: Got endpoints: latency-svc-4ckt7 [601.11784ms] Mar 12 21:15:55.877: INFO: Created: latency-svc-ql27f Mar 12 21:15:55.880: INFO: Got endpoints: latency-svc-ql27f [576.015841ms] Mar 12 21:15:55.908: INFO: Created: latency-svc-cv2lk Mar 12 21:15:55.930: INFO: Got endpoints: latency-svc-cv2lk [595.093639ms] Mar 12 21:15:55.991: INFO: Created: latency-svc-fgvd6 Mar 12 21:15:55.993: INFO: Got endpoints: latency-svc-fgvd6 [627.152468ms] Mar 12 21:15:56.021: INFO: Created: latency-svc-kjdqh Mar 12 21:15:56.039: INFO: Got endpoints: latency-svc-kjdqh [606.840796ms] Mar 12 21:15:56.063: INFO: Created: latency-svc-xzpcw Mar 12 21:15:56.081: INFO: Got endpoints: latency-svc-xzpcw [600.316242ms] Mar 12 21:15:56.127: INFO: Created: latency-svc-7jtv4 Mar 12 21:15:56.134: INFO: Got endpoints: latency-svc-7jtv4 [613.828468ms] Mar 12 21:15:56.165: INFO: Created: latency-svc-trzl8 Mar 12 21:15:56.178: INFO: Got endpoints: latency-svc-trzl8 [611.534925ms] Mar 12 21:15:56.214: INFO: Created: latency-svc-z88w2 Mar 12 21:15:56.223: INFO: Got endpoints: latency-svc-z88w2 [628.253801ms] Mar 12 21:15:56.284: INFO: Created: latency-svc-9l7wk Mar 12 21:15:56.289: INFO: Got endpoints: latency-svc-9l7wk [671.046588ms] Mar 12 21:15:56.310: INFO: Created: latency-svc-fbzjz Mar 12 21:15:56.314: INFO: Got endpoints: latency-svc-fbzjz [658.227607ms] Mar 12 21:15:56.339: INFO: Created: latency-svc-pdg8s Mar 12 21:15:56.344: INFO: Got endpoints: latency-svc-pdg8s [632.711892ms] Mar 12 21:15:56.369: INFO: Created: latency-svc-bxmds Mar 12 21:15:56.375: INFO: Got endpoints: latency-svc-bxmds [622.341218ms] Mar 12 21:15:56.440: INFO: Created: latency-svc-c6wlz Mar 12 21:15:56.447: INFO: Got endpoints: latency-svc-c6wlz [670.687227ms] Mar 12 21:15:56.471: INFO: Created: latency-svc-kxbn2 Mar 12 21:15:56.476: INFO: Got endpoints: latency-svc-kxbn2 [670.278896ms] Mar 12 21:15:56.513: INFO: Created: latency-svc-ddhcq Mar 12 21:15:56.519: INFO: Got endpoints: latency-svc-ddhcq [660.609143ms] Mar 12 21:15:56.602: INFO: Created: latency-svc-ntn8s Mar 12 21:15:56.609: INFO: Got endpoints: latency-svc-ntn8s [728.513662ms] Mar 12 21:15:56.632: INFO: Created: latency-svc-ht4sj Mar 12 21:15:56.639: INFO: Got endpoints: latency-svc-ht4sj [708.904517ms] Mar 12 21:15:56.693: INFO: Created: latency-svc-jjs6z Mar 12 21:15:56.726: INFO: Got endpoints: latency-svc-jjs6z [733.173697ms] Mar 12 21:15:56.746: INFO: Created: latency-svc-777dj Mar 12 21:15:56.754: INFO: Got endpoints: latency-svc-777dj [714.321215ms] Mar 12 21:15:56.777: INFO: Created: latency-svc-p5nsv Mar 12 21:15:56.790: INFO: Got endpoints: latency-svc-p5nsv [709.652234ms] Mar 12 21:15:56.864: INFO: Created: latency-svc-9vvh7 Mar 12 21:15:56.866: INFO: Got endpoints: latency-svc-9vvh7 [731.657347ms] Mar 12 21:15:56.886: INFO: Created: latency-svc-864v8 Mar 12 21:15:56.909: INFO: Got endpoints: latency-svc-864v8 [730.304411ms] Mar 12 21:15:56.909: INFO: Created: latency-svc-55nvs Mar 12 21:15:56.917: INFO: Got endpoints: latency-svc-55nvs [694.581254ms] Mar 12 21:15:56.944: INFO: Created: 
latency-svc-k9xq5 Mar 12 21:15:56.953: INFO: Got endpoints: latency-svc-k9xq5 [663.768922ms] Mar 12 21:15:57.013: INFO: Created: latency-svc-l7jcc Mar 12 21:15:57.020: INFO: Got endpoints: latency-svc-l7jcc [705.960014ms] Mar 12 21:15:57.035: INFO: Created: latency-svc-kms4q Mar 12 21:15:57.038: INFO: Got endpoints: latency-svc-kms4q [694.156249ms] Mar 12 21:15:57.065: INFO: Created: latency-svc-592g6 Mar 12 21:15:57.068: INFO: Got endpoints: latency-svc-592g6 [693.720913ms] Mar 12 21:15:57.145: INFO: Created: latency-svc-rpmfh Mar 12 21:15:57.147: INFO: Got endpoints: latency-svc-rpmfh [700.390478ms] Mar 12 21:15:57.173: INFO: Created: latency-svc-hxmtc Mar 12 21:15:57.176: INFO: Got endpoints: latency-svc-hxmtc [700.230608ms] Mar 12 21:15:57.219: INFO: Created: latency-svc-x75hp Mar 12 21:15:57.220: INFO: Got endpoints: latency-svc-x75hp [701.436246ms] Mar 12 21:15:57.289: INFO: Created: latency-svc-mkfqg Mar 12 21:15:57.298: INFO: Got endpoints: latency-svc-mkfqg [688.674586ms] Mar 12 21:15:57.366: INFO: Created: latency-svc-rbnlq Mar 12 21:15:57.369: INFO: Got endpoints: latency-svc-rbnlq [729.956902ms] Mar 12 21:15:57.433: INFO: Created: latency-svc-29k9q Mar 12 21:15:57.435: INFO: Got endpoints: latency-svc-29k9q [709.131401ms] Mar 12 21:15:57.497: INFO: Created: latency-svc-999rk Mar 12 21:15:57.508: INFO: Got endpoints: latency-svc-999rk [753.833172ms] Mar 12 21:15:57.576: INFO: Created: latency-svc-7lcjt Mar 12 21:15:57.582: INFO: Got endpoints: latency-svc-7lcjt [791.898777ms] Mar 12 21:15:57.612: INFO: Created: latency-svc-xl2bx Mar 12 21:15:57.616: INFO: Got endpoints: latency-svc-xl2bx [750.268306ms] Mar 12 21:15:57.652: INFO: Created: latency-svc-44fpc Mar 12 21:15:57.658: INFO: Got endpoints: latency-svc-44fpc [749.715037ms] Mar 12 21:15:57.720: INFO: Created: latency-svc-27d6j Mar 12 21:15:57.722: INFO: Got endpoints: latency-svc-27d6j [804.719922ms] Mar 12 21:15:57.749: INFO: Created: latency-svc-mgz5p Mar 12 21:15:57.755: INFO: Got endpoints: latency-svc-mgz5p [801.966251ms] Mar 12 21:15:57.805: INFO: Created: latency-svc-zzqq7 Mar 12 21:15:57.809: INFO: Got endpoints: latency-svc-zzqq7 [789.506057ms] Mar 12 21:15:57.880: INFO: Created: latency-svc-rh5ft Mar 12 21:15:57.912: INFO: Got endpoints: latency-svc-rh5ft [873.859534ms] Mar 12 21:15:57.953: INFO: Created: latency-svc-mr8j7 Mar 12 21:15:57.960: INFO: Got endpoints: latency-svc-mr8j7 [891.50941ms] Mar 12 21:15:58.009: INFO: Created: latency-svc-jz6bn Mar 12 21:15:58.012: INFO: Got endpoints: latency-svc-jz6bn [865.175411ms] Mar 12 21:15:58.038: INFO: Created: latency-svc-lbv4t Mar 12 21:15:58.044: INFO: Got endpoints: latency-svc-lbv4t [867.965884ms] Mar 12 21:15:58.080: INFO: Created: latency-svc-8x9xf Mar 12 21:15:58.087: INFO: Got endpoints: latency-svc-8x9xf [866.875631ms] Mar 12 21:15:58.139: INFO: Created: latency-svc-s2xpr Mar 12 21:15:58.150: INFO: Got endpoints: latency-svc-s2xpr [852.525211ms] Mar 12 21:15:58.206: INFO: Created: latency-svc-tn6p8 Mar 12 21:15:58.213: INFO: Got endpoints: latency-svc-tn6p8 [844.344262ms] Mar 12 21:15:58.301: INFO: Created: latency-svc-wr7qp Mar 12 21:15:58.311: INFO: Got endpoints: latency-svc-wr7qp [875.56567ms] Mar 12 21:15:58.331: INFO: Created: latency-svc-lfk2j Mar 12 21:15:58.340: INFO: Got endpoints: latency-svc-lfk2j [832.437792ms] Mar 12 21:15:58.362: INFO: Created: latency-svc-8zr2m Mar 12 21:15:58.408: INFO: Got endpoints: latency-svc-8zr2m [825.881777ms] Mar 12 21:15:58.440: INFO: Created: latency-svc-hwgrc Mar 12 21:15:58.449: INFO: Got endpoints: 
latency-svc-hwgrc [832.603186ms] Mar 12 21:15:58.476: INFO: Created: latency-svc-d5skv Mar 12 21:15:58.492: INFO: Got endpoints: latency-svc-d5skv [833.293103ms] Mar 12 21:15:58.558: INFO: Created: latency-svc-2xw8b Mar 12 21:15:58.572: INFO: Got endpoints: latency-svc-2xw8b [849.874272ms] Mar 12 21:15:58.596: INFO: Created: latency-svc-4kdvp Mar 12 21:15:58.600: INFO: Got endpoints: latency-svc-4kdvp [844.430993ms] Mar 12 21:15:58.620: INFO: Created: latency-svc-mq79l Mar 12 21:15:58.624: INFO: Got endpoints: latency-svc-mq79l [814.854535ms] Mar 12 21:15:58.643: INFO: Created: latency-svc-bjcg8 Mar 12 21:15:58.648: INFO: Got endpoints: latency-svc-bjcg8 [736.056587ms] Mar 12 21:15:58.702: INFO: Created: latency-svc-dbsz9 Mar 12 21:15:58.704: INFO: Got endpoints: latency-svc-dbsz9 [743.992437ms] Mar 12 21:15:58.734: INFO: Created: latency-svc-ww49r Mar 12 21:15:58.752: INFO: Got endpoints: latency-svc-ww49r [739.812545ms] Mar 12 21:15:58.769: INFO: Created: latency-svc-8ggwf Mar 12 21:15:58.775: INFO: Got endpoints: latency-svc-8ggwf [730.253961ms] Mar 12 21:15:58.793: INFO: Created: latency-svc-66kc8 Mar 12 21:15:58.800: INFO: Got endpoints: latency-svc-66kc8 [712.650226ms] Mar 12 21:15:58.840: INFO: Created: latency-svc-k997r Mar 12 21:15:58.848: INFO: Got endpoints: latency-svc-k997r [697.771739ms] Mar 12 21:15:58.867: INFO: Created: latency-svc-hnxj7 Mar 12 21:15:58.872: INFO: Got endpoints: latency-svc-hnxj7 [658.729787ms] Mar 12 21:15:58.891: INFO: Created: latency-svc-bhqzv Mar 12 21:15:58.897: INFO: Got endpoints: latency-svc-bhqzv [586.816037ms] Mar 12 21:15:58.913: INFO: Created: latency-svc-ntc89 Mar 12 21:15:58.921: INFO: Got endpoints: latency-svc-ntc89 [581.010992ms] Mar 12 21:15:58.937: INFO: Created: latency-svc-n2rcz Mar 12 21:15:58.939: INFO: Got endpoints: latency-svc-n2rcz [530.733096ms] Mar 12 21:15:58.985: INFO: Created: latency-svc-wzgxk Mar 12 21:15:58.993: INFO: Got endpoints: latency-svc-wzgxk [544.599392ms] Mar 12 21:15:59.010: INFO: Created: latency-svc-dz6h7 Mar 12 21:15:59.018: INFO: Got endpoints: latency-svc-dz6h7 [526.15834ms] Mar 12 21:15:59.035: INFO: Created: latency-svc-hd4lp Mar 12 21:15:59.042: INFO: Got endpoints: latency-svc-hd4lp [470.419493ms] Mar 12 21:15:59.059: INFO: Created: latency-svc-4np22 Mar 12 21:15:59.066: INFO: Got endpoints: latency-svc-4np22 [466.694573ms] Mar 12 21:15:59.083: INFO: Created: latency-svc-d6dxv Mar 12 21:15:59.127: INFO: Got endpoints: latency-svc-d6dxv [502.662211ms] Mar 12 21:15:59.142: INFO: Created: latency-svc-2845r Mar 12 21:15:59.151: INFO: Got endpoints: latency-svc-2845r [503.262516ms] Mar 12 21:15:59.171: INFO: Created: latency-svc-9cm86 Mar 12 21:15:59.175: INFO: Got endpoints: latency-svc-9cm86 [471.484735ms] Mar 12 21:15:59.197: INFO: Created: latency-svc-h2pxq Mar 12 21:15:59.200: INFO: Got endpoints: latency-svc-h2pxq [447.569086ms] Mar 12 21:15:59.221: INFO: Created: latency-svc-l2grs Mar 12 21:15:59.223: INFO: Got endpoints: latency-svc-l2grs [448.504847ms] Mar 12 21:15:59.273: INFO: Created: latency-svc-pschf Mar 12 21:15:59.273: INFO: Got endpoints: latency-svc-pschf [473.818931ms] Mar 12 21:15:59.316: INFO: Created: latency-svc-fdd77 Mar 12 21:15:59.320: INFO: Got endpoints: latency-svc-fdd77 [471.890961ms] Mar 12 21:15:59.339: INFO: Created: latency-svc-hd56z Mar 12 21:15:59.351: INFO: Got endpoints: latency-svc-hd56z [479.119041ms] Mar 12 21:15:59.409: INFO: Created: latency-svc-p2pxk Mar 12 21:15:59.417: INFO: Got endpoints: latency-svc-p2pxk [519.198064ms] Mar 12 21:15:59.442: INFO: Created: 
latency-svc-7g8cw Mar 12 21:15:59.447: INFO: Got endpoints: latency-svc-7g8cw [525.54078ms] Mar 12 21:15:59.466: INFO: Created: latency-svc-8mjwd Mar 12 21:15:59.477: INFO: Got endpoints: latency-svc-8mjwd [538.186479ms] Mar 12 21:15:59.496: INFO: Created: latency-svc-lhmkh Mar 12 21:15:59.564: INFO: Got endpoints: latency-svc-lhmkh [570.485662ms] Mar 12 21:15:59.567: INFO: Created: latency-svc-62m5w Mar 12 21:15:59.574: INFO: Got endpoints: latency-svc-62m5w [555.890272ms] Mar 12 21:15:59.599: INFO: Created: latency-svc-d7z7b Mar 12 21:15:59.604: INFO: Got endpoints: latency-svc-d7z7b [561.398323ms] Mar 12 21:15:59.622: INFO: Created: latency-svc-62z2v Mar 12 21:15:59.628: INFO: Got endpoints: latency-svc-62z2v [561.780477ms] Mar 12 21:15:59.645: INFO: Created: latency-svc-tfqlv Mar 12 21:15:59.653: INFO: Got endpoints: latency-svc-tfqlv [525.724083ms] Mar 12 21:15:59.702: INFO: Created: latency-svc-s682m Mar 12 21:15:59.704: INFO: Got endpoints: latency-svc-s682m [552.540188ms] Mar 12 21:15:59.729: INFO: Created: latency-svc-m7zgf Mar 12 21:15:59.737: INFO: Got endpoints: latency-svc-m7zgf [561.489001ms] Mar 12 21:15:59.754: INFO: Created: latency-svc-4rpcz Mar 12 21:15:59.762: INFO: Got endpoints: latency-svc-4rpcz [562.333115ms] Mar 12 21:15:59.790: INFO: Created: latency-svc-cp4hs Mar 12 21:15:59.858: INFO: Got endpoints: latency-svc-cp4hs [634.427389ms] Mar 12 21:15:59.860: INFO: Created: latency-svc-r5w9l Mar 12 21:15:59.864: INFO: Got endpoints: latency-svc-r5w9l [590.683348ms] Mar 12 21:15:59.886: INFO: Created: latency-svc-tnhdx Mar 12 21:15:59.894: INFO: Got endpoints: latency-svc-tnhdx [574.320465ms] Mar 12 21:15:59.916: INFO: Created: latency-svc-7gn99 Mar 12 21:15:59.934: INFO: Got endpoints: latency-svc-7gn99 [582.988014ms] Mar 12 21:15:59.995: INFO: Created: latency-svc-9fcws Mar 12 21:15:59.997: INFO: Got endpoints: latency-svc-9fcws [580.494318ms] Mar 12 21:16:00.024: INFO: Created: latency-svc-bf2r8 Mar 12 21:16:00.028: INFO: Got endpoints: latency-svc-bf2r8 [580.976075ms] Mar 12 21:16:00.060: INFO: Created: latency-svc-7r59v Mar 12 21:16:00.078: INFO: Got endpoints: latency-svc-7r59v [601.048949ms] Mar 12 21:16:00.127: INFO: Created: latency-svc-bxxrl Mar 12 21:16:00.131: INFO: Got endpoints: latency-svc-bxxrl [567.220789ms] Mar 12 21:16:00.157: INFO: Created: latency-svc-4lmc4 Mar 12 21:16:00.172: INFO: Got endpoints: latency-svc-4lmc4 [598.386149ms] Mar 12 21:16:00.222: INFO: Created: latency-svc-5vbnd Mar 12 21:16:00.271: INFO: Got endpoints: latency-svc-5vbnd [666.728723ms] Mar 12 21:16:00.283: INFO: Created: latency-svc-p7dvm Mar 12 21:16:00.286: INFO: Got endpoints: latency-svc-p7dvm [658.009491ms] Mar 12 21:16:00.306: INFO: Created: latency-svc-zbf69 Mar 12 21:16:00.311: INFO: Got endpoints: latency-svc-zbf69 [658.349935ms] Mar 12 21:16:00.330: INFO: Created: latency-svc-84c5j Mar 12 21:16:00.335: INFO: Got endpoints: latency-svc-84c5j [631.350498ms] Mar 12 21:16:00.354: INFO: Created: latency-svc-rsb8t Mar 12 21:16:00.360: INFO: Got endpoints: latency-svc-rsb8t [623.280327ms] Mar 12 21:16:00.409: INFO: Created: latency-svc-95qfh Mar 12 21:16:00.414: INFO: Got endpoints: latency-svc-95qfh [651.862883ms] Mar 12 21:16:00.433: INFO: Created: latency-svc-jd5dn Mar 12 21:16:00.438: INFO: Got endpoints: latency-svc-jd5dn [580.608694ms] Mar 12 21:16:00.457: INFO: Created: latency-svc-kq9cp Mar 12 21:16:00.463: INFO: Got endpoints: latency-svc-kq9cp [598.444335ms] Mar 12 21:16:00.485: INFO: Created: latency-svc-64q5n Mar 12 21:16:00.504: INFO: Got endpoints: 
latency-svc-64q5n [609.42159ms] Mar 12 21:16:00.558: INFO: Created: latency-svc-nmwn2 Mar 12 21:16:00.583: INFO: Got endpoints: latency-svc-nmwn2 [648.413042ms] Mar 12 21:16:00.583: INFO: Created: latency-svc-lwcsk Mar 12 21:16:00.589: INFO: Got endpoints: latency-svc-lwcsk [592.181378ms] Mar 12 21:16:00.613: INFO: Created: latency-svc-7x4nv Mar 12 21:16:00.615: INFO: Got endpoints: latency-svc-7x4nv [587.312025ms] Mar 12 21:16:00.643: INFO: Created: latency-svc-p7n6b Mar 12 21:16:00.645: INFO: Got endpoints: latency-svc-p7n6b [566.448363ms] Mar 12 21:16:00.696: INFO: Created: latency-svc-rmzc8 Mar 12 21:16:00.699: INFO: Got endpoints: latency-svc-rmzc8 [567.682476ms] Mar 12 21:16:00.720: INFO: Created: latency-svc-w6dnc Mar 12 21:16:00.729: INFO: Got endpoints: latency-svc-w6dnc [557.060449ms] Mar 12 21:16:00.744: INFO: Created: latency-svc-x2ntc Mar 12 21:16:00.753: INFO: Got endpoints: latency-svc-x2ntc [482.34744ms] Mar 12 21:16:00.775: INFO: Created: latency-svc-srzhr Mar 12 21:16:00.786: INFO: Got endpoints: latency-svc-srzhr [500.132833ms] Mar 12 21:16:00.827: INFO: Created: latency-svc-q7wbc Mar 12 21:16:00.830: INFO: Got endpoints: latency-svc-q7wbc [518.758274ms] Mar 12 21:16:00.852: INFO: Created: latency-svc-2bmql Mar 12 21:16:00.856: INFO: Got endpoints: latency-svc-2bmql [520.333129ms] Mar 12 21:16:00.876: INFO: Created: latency-svc-hvhng Mar 12 21:16:00.894: INFO: Got endpoints: latency-svc-hvhng [533.579975ms] Mar 12 21:16:00.913: INFO: Created: latency-svc-z2zj8 Mar 12 21:16:00.916: INFO: Got endpoints: latency-svc-z2zj8 [502.370627ms] Mar 12 21:16:00.965: INFO: Created: latency-svc-2whxc Mar 12 21:16:00.985: INFO: Got endpoints: latency-svc-2whxc [546.244736ms] Mar 12 21:16:00.985: INFO: Created: latency-svc-wtqdr Mar 12 21:16:01.002: INFO: Got endpoints: latency-svc-wtqdr [539.136261ms] Mar 12 21:16:01.020: INFO: Created: latency-svc-tggvr Mar 12 21:16:01.022: INFO: Got endpoints: latency-svc-tggvr [518.140668ms] Mar 12 21:16:01.044: INFO: Created: latency-svc-7lkcp Mar 12 21:16:01.049: INFO: Got endpoints: latency-svc-7lkcp [466.1018ms] Mar 12 21:16:01.097: INFO: Created: latency-svc-jdj9r Mar 12 21:16:01.124: INFO: Got endpoints: latency-svc-jdj9r [534.400244ms] Mar 12 21:16:01.158: INFO: Created: latency-svc-2n866 Mar 12 21:16:01.174: INFO: Got endpoints: latency-svc-2n866 [558.528635ms] Mar 12 21:16:01.195: INFO: Created: latency-svc-6wqp4 Mar 12 21:16:01.235: INFO: Got endpoints: latency-svc-6wqp4 [589.950109ms] Mar 12 21:16:01.239: INFO: Created: latency-svc-h4mxb Mar 12 21:16:01.255: INFO: Got endpoints: latency-svc-h4mxb [556.346197ms] Mar 12 21:16:01.279: INFO: Created: latency-svc-q2bq6 Mar 12 21:16:01.287: INFO: Got endpoints: latency-svc-q2bq6 [557.696119ms] Mar 12 21:16:01.351: INFO: Created: latency-svc-m89ss Mar 12 21:16:01.385: INFO: Got endpoints: latency-svc-m89ss [631.792851ms] Mar 12 21:16:01.399: INFO: Created: latency-svc-szzrp Mar 12 21:16:01.408: INFO: Got endpoints: latency-svc-szzrp [621.296292ms] Mar 12 21:16:01.429: INFO: Created: latency-svc-5zvk2 Mar 12 21:16:01.444: INFO: Got endpoints: latency-svc-5zvk2 [614.080788ms] Mar 12 21:16:01.466: INFO: Created: latency-svc-gmprr Mar 12 21:16:01.474: INFO: Got endpoints: latency-svc-gmprr [618.425253ms] Mar 12 21:16:01.528: INFO: Created: latency-svc-tn9gc Mar 12 21:16:01.555: INFO: Got endpoints: latency-svc-tn9gc [661.434797ms] Mar 12 21:16:01.557: INFO: Created: latency-svc-b67tv Mar 12 21:16:01.559: INFO: Got endpoints: latency-svc-b67tv [642.27675ms] Mar 12 21:16:01.603: INFO: Created: 
latency-svc-txrhz Mar 12 21:16:01.619: INFO: Got endpoints: latency-svc-txrhz [634.682165ms] Mar 12 21:16:01.656: INFO: Created: latency-svc-mvv27 Mar 12 21:16:01.662: INFO: Got endpoints: latency-svc-mvv27 [660.128027ms] Mar 12 21:16:01.681: INFO: Created: latency-svc-p586b Mar 12 21:16:01.686: INFO: Got endpoints: latency-svc-p586b [663.604192ms] Mar 12 21:16:01.704: INFO: Created: latency-svc-wmds8 Mar 12 21:16:01.710: INFO: Got endpoints: latency-svc-wmds8 [661.056672ms] Mar 12 21:16:01.729: INFO: Created: latency-svc-hsmmr Mar 12 21:16:01.734: INFO: Got endpoints: latency-svc-hsmmr [610.477255ms] Mar 12 21:16:01.754: INFO: Created: latency-svc-6vvgp Mar 12 21:16:01.792: INFO: Got endpoints: latency-svc-6vvgp [618.46683ms] Mar 12 21:16:01.807: INFO: Created: latency-svc-rbr5v Mar 12 21:16:01.813: INFO: Got endpoints: latency-svc-rbr5v [578.236363ms] Mar 12 21:16:01.832: INFO: Created: latency-svc-ctkcn Mar 12 21:16:01.833: INFO: Got endpoints: latency-svc-ctkcn [577.870681ms] Mar 12 21:16:01.873: INFO: Created: latency-svc-6rrkq Mar 12 21:16:01.874: INFO: Got endpoints: latency-svc-6rrkq [587.172733ms] Mar 12 21:16:01.935: INFO: Created: latency-svc-hhrqr Mar 12 21:16:01.947: INFO: Got endpoints: latency-svc-hhrqr [561.876244ms] Mar 12 21:16:01.981: INFO: Created: latency-svc-2qznl Mar 12 21:16:02.001: INFO: Got endpoints: latency-svc-2qznl [593.037465ms] Mar 12 21:16:02.061: INFO: Created: latency-svc-fvl6h Mar 12 21:16:02.083: INFO: Got endpoints: latency-svc-fvl6h [639.517121ms] Mar 12 21:16:02.085: INFO: Created: latency-svc-7vw94 Mar 12 21:16:02.091: INFO: Got endpoints: latency-svc-7vw94 [616.992365ms] Mar 12 21:16:02.108: INFO: Created: latency-svc-gk9sl Mar 12 21:16:02.115: INFO: Got endpoints: latency-svc-gk9sl [559.919348ms] Mar 12 21:16:02.131: INFO: Created: latency-svc-t5jmb Mar 12 21:16:02.140: INFO: Got endpoints: latency-svc-t5jmb [581.234889ms] Mar 12 21:16:02.162: INFO: Created: latency-svc-svfmv Mar 12 21:16:02.187: INFO: Got endpoints: latency-svc-svfmv [567.500656ms] Mar 12 21:16:02.209: INFO: Created: latency-svc-6dj8v Mar 12 21:16:02.225: INFO: Got endpoints: latency-svc-6dj8v [562.586362ms] Mar 12 21:16:02.253: INFO: Created: latency-svc-8xjl4 Mar 12 21:16:02.261: INFO: Got endpoints: latency-svc-8xjl4 [575.414808ms] Mar 12 21:16:02.282: INFO: Created: latency-svc-lfbsw Mar 12 21:16:02.285: INFO: Got endpoints: latency-svc-lfbsw [574.800342ms] Mar 12 21:16:02.331: INFO: Created: latency-svc-sgdnh Mar 12 21:16:02.347: INFO: Got endpoints: latency-svc-sgdnh [612.448362ms] Mar 12 21:16:02.365: INFO: Created: latency-svc-lczv5 Mar 12 21:16:02.371: INFO: Got endpoints: latency-svc-lczv5 [579.175726ms] Mar 12 21:16:02.390: INFO: Created: latency-svc-nnldh Mar 12 21:16:02.394: INFO: Got endpoints: latency-svc-nnldh [581.098511ms] Mar 12 21:16:02.415: INFO: Created: latency-svc-9z5pz Mar 12 21:16:02.470: INFO: Got endpoints: latency-svc-9z5pz [636.318909ms] Mar 12 21:16:02.497: INFO: Created: latency-svc-gmlc2 Mar 12 21:16:02.503: INFO: Got endpoints: latency-svc-gmlc2 [628.331681ms] Mar 12 21:16:02.527: INFO: Created: latency-svc-n6wzv Mar 12 21:16:02.539: INFO: Got endpoints: latency-svc-n6wzv [592.366227ms] Mar 12 21:16:02.600: INFO: Created: latency-svc-wr8xc Mar 12 21:16:02.603: INFO: Got endpoints: latency-svc-wr8xc [601.97679ms] Mar 12 21:16:02.636: INFO: Created: latency-svc-k5x27 Mar 12 21:16:02.642: INFO: Got endpoints: latency-svc-k5x27 [558.423777ms] Mar 12 21:16:02.659: INFO: Created: latency-svc-4gv6l Mar 12 21:16:02.689: INFO: Got endpoints: 
latency-svc-4gv6l [597.845094ms] Mar 12 21:16:02.689: INFO: Created: latency-svc-2qd7x Mar 12 21:16:02.738: INFO: Created: latency-svc-t7dwr Mar 12 21:16:02.739: INFO: Got endpoints: latency-svc-2qd7x [623.276342ms] Mar 12 21:16:02.761: INFO: Created: latency-svc-2tdpt Mar 12 21:16:02.786: INFO: Got endpoints: latency-svc-t7dwr [645.510357ms] Mar 12 21:16:02.786: INFO: Created: latency-svc-hqkfk Mar 12 21:16:02.829: INFO: Got endpoints: latency-svc-2tdpt [641.62464ms] Mar 12 21:16:02.879: INFO: Got endpoints: latency-svc-hqkfk [654.161672ms] Mar 12 21:16:02.879: INFO: Latencies: [45.862755ms 67.951479ms 92.747804ms 152.90644ms 185.898261ms 228.477727ms 274.293706ms 303.893956ms 329.279509ms 362.037816ms 418.528668ms 447.569086ms 448.504847ms 454.480379ms 466.1018ms 466.694573ms 470.419493ms 471.484735ms 471.890961ms 473.818931ms 479.119041ms 482.34744ms 500.132833ms 502.370627ms 502.662211ms 503.262516ms 518.140668ms 518.758274ms 519.198064ms 520.333129ms 525.54078ms 525.724083ms 526.15834ms 530.733096ms 533.579975ms 534.400244ms 537.27285ms 538.186479ms 539.136261ms 544.599392ms 546.244736ms 552.540188ms 555.890272ms 556.346197ms 557.060449ms 557.696119ms 558.423777ms 558.528635ms 559.919348ms 561.398323ms 561.489001ms 561.780477ms 561.876244ms 562.333115ms 562.586362ms 566.448363ms 567.220789ms 567.500656ms 567.682476ms 570.485662ms 574.320465ms 574.800342ms 575.414808ms 576.015841ms 577.870681ms 578.236363ms 579.175726ms 580.494318ms 580.608694ms 580.976075ms 581.010992ms 581.098511ms 581.234889ms 582.988014ms 586.816037ms 587.085655ms 587.172733ms 587.312025ms 589.950109ms 590.683348ms 592.181378ms 592.366227ms 593.037465ms 595.093639ms 597.845094ms 598.386149ms 598.444335ms 600.316242ms 601.048949ms 601.11784ms 601.729814ms 601.97679ms 606.840796ms 609.42159ms 610.477255ms 611.534925ms 612.448362ms 613.828468ms 614.080788ms 616.992365ms 618.425253ms 618.46683ms 619.795411ms 620.571594ms 621.296292ms 622.341218ms 623.276342ms 623.280327ms 627.152468ms 628.253801ms 628.331681ms 631.350498ms 631.792851ms 631.994973ms 632.711892ms 634.427389ms 634.682165ms 635.510774ms 636.318909ms 639.517121ms 641.62464ms 642.27675ms 645.510357ms 648.413042ms 651.862883ms 654.161672ms 654.293057ms 658.009491ms 658.227607ms 658.349935ms 658.729787ms 660.128027ms 660.609143ms 661.056672ms 661.434797ms 663.604192ms 663.768922ms 666.728723ms 670.278896ms 670.687227ms 671.046588ms 688.674586ms 689.026227ms 690.898127ms 693.720913ms 694.156249ms 694.581254ms 697.771739ms 700.230608ms 700.390478ms 701.436246ms 704.590411ms 705.960014ms 708.904517ms 709.131401ms 709.652234ms 712.650226ms 712.857369ms 714.321215ms 716.291811ms 717.370113ms 721.798684ms 726.614409ms 728.513662ms 729.956902ms 730.253961ms 730.304411ms 731.657347ms 731.762637ms 733.173697ms 736.056587ms 737.312378ms 739.812545ms 743.992437ms 749.715037ms 750.268306ms 753.833172ms 757.546016ms 758.909313ms 768.046985ms 774.329228ms 789.506057ms 791.898777ms 801.966251ms 804.719922ms 814.854535ms 825.881777ms 832.437792ms 832.603186ms 833.293103ms 844.344262ms 844.430993ms 849.874272ms 852.525211ms 865.175411ms 866.875631ms 867.965884ms 873.859534ms 875.56567ms 891.50941ms] Mar 12 21:16:02.879: INFO: 50 %ile: 618.425253ms Mar 12 21:16:02.879: INFO: 90 %ile: 774.329228ms Mar 12 21:16:02.879: INFO: 99 %ile: 875.56567ms Mar 12 21:16:02.879: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:02.879: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3388" for this suite. • [SLOW TEST:11.842 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":50,"skipped":775,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:02.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:16:03.479: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:16:05.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644563, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644563, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:16:08.523: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 12 21:16:08.557: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:08.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5192" for this suite. 
STEP: Destroying namespace "webhook-5192-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.885 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":51,"skipped":786,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:08.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:16:09.184: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:16:11.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644569, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644569, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644569, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644569, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:16:14.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 21:16:15.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 21:16:16.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 21:16:17.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 21:16:18.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 21:16:19.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Mar 12 21:16:20.297: INFO: Waiting 
for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:20.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5178" for this suite. STEP: Destroying namespace "webhook-5178-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.205 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":52,"skipped":803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:20.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c5afb349-97e7-4a9d-a243-b615edec1b3a STEP: Creating a pod to test consume secrets Mar 12 21:16:21.119: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c" in namespace "projected-576" to be "success or failure" Mar 12 21:16:21.143: INFO: Pod "pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.80834ms Mar 12 21:16:23.147: INFO: Pod "pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028121727s Mar 12 21:16:25.150: INFO: Pod "pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031102038s STEP: Saw pod success Mar 12 21:16:25.150: INFO: Pod "pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c" satisfied condition "success or failure" Mar 12 21:16:25.153: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c container projected-secret-volume-test: STEP: delete the pod Mar 12 21:16:25.174: INFO: Waiting for pod pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c to disappear Mar 12 21:16:25.194: INFO: Pod pod-projected-secrets-fed95fee-6a07-4c93-923f-c4a5874de72c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:25.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-576" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":844,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:25.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 12 21:16:25.258: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 12 21:16:25.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2932' Mar 12 21:16:25.521: INFO: stderr: "" Mar 12 21:16:25.521: INFO: stdout: "service/agnhost-slave created\n" Mar 12 21:16:25.521: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 12 21:16:25.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2932' Mar 12 21:16:25.769: INFO: stderr: "" Mar 12 21:16:25.769: INFO: stdout: "service/agnhost-master created\n" Mar 12 21:16:25.770: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 12 21:16:25.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2932' Mar 12 21:16:26.022: INFO: stderr: "" Mar 12 21:16:26.022: INFO: stdout: "service/frontend created\n" Mar 12 21:16:26.022: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 12 21:16:26.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2932' Mar 12 21:16:26.227: INFO: stderr: "" Mar 12 21:16:26.227: INFO: stdout: "deployment.apps/frontend created\n" Mar 12 21:16:26.228: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 21:16:26.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2932' Mar 12 21:16:26.443: INFO: stderr: "" Mar 12 21:16:26.443: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 12 21:16:26.443: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 21:16:26.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2932' Mar 12 21:16:26.666: INFO: stderr: "" Mar 12 21:16:26.666: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 12 21:16:26.666: INFO: Waiting for all frontend pods to be Running. Mar 12 21:16:31.717: INFO: Waiting for frontend to serve content. Mar 12 21:16:31.726: INFO: Trying to add a new entry to the guestbook. Mar 12 21:16:31.736: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 12 21:16:31.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2932' Mar 12 21:16:31.942: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 21:16:31.942: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 12 21:16:31.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2932' Mar 12 21:16:32.080: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 12 21:16:32.080: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 21:16:32.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2932' Mar 12 21:16:32.166: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 21:16:32.166: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 21:16:32.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2932' Mar 12 21:16:32.238: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 21:16:32.238: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 21:16:32.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2932' Mar 12 21:16:32.315: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 21:16:32.316: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 21:16:32.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2932' Mar 12 21:16:32.383: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 21:16:32.383: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2932" for this suite. 
• [SLOW TEST:7.204 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":54,"skipped":860,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:32.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-94aa3ca0-a695-46e8-827f-0b1b609b430e STEP: Creating a pod to test consume secrets Mar 12 21:16:32.618: INFO: Waiting up to 5m0s for pod "pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539" in namespace "secrets-8189" to be "success or failure" Mar 12 21:16:32.677: INFO: Pod "pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539": Phase="Pending", Reason="", readiness=false. Elapsed: 59.640826ms Mar 12 21:16:34.680: INFO: Pod "pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061984483s Mar 12 21:16:36.683: INFO: Pod "pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065096998s STEP: Saw pod success Mar 12 21:16:36.683: INFO: Pod "pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539" satisfied condition "success or failure" Mar 12 21:16:36.685: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539 container secret-volume-test: STEP: delete the pod Mar 12 21:16:36.729: INFO: Waiting for pod pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539 to disappear Mar 12 21:16:36.737: INFO: Pod pod-secrets-b6bf204f-28d7-488e-91ef-013a9466c539 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:36.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8189" for this suite. STEP: Destroying namespace "secret-namespace-4198" for this suite. 
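The test above relies on Secret names being namespace-scoped: two Secrets may share a name as long as they live in different namespaces, and a pod always resolves the one in its own namespace. A minimal sketch of that situation (names, namespaces, and values here are illustrative, not taken from the run):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # same name...
  namespace: secrets-a         # ...in one namespace
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # the same name again...
  namespace: secrets-b         # ...in a second namespace; the two objects never collide
stringData:
  data-1: a-different-value

A pod in secrets-a that mounts secretName: secret-test sees value-1; the identically named Secret in secrets-b is invisible to it, which is exactly what the test asserts.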
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":870,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:36.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b4289741-e85a-43ac-bc92-43c904b3e27d STEP: Creating a pod to test consume secrets Mar 12 21:16:36.847: INFO: Waiting up to 5m0s for pod "pod-secrets-968922da-a294-49df-ac99-a25d661b2798" in namespace "secrets-4671" to be "success or failure" Mar 12 21:16:36.866: INFO: Pod "pod-secrets-968922da-a294-49df-ac99-a25d661b2798": Phase="Pending", Reason="", readiness=false. Elapsed: 18.827318ms Mar 12 21:16:38.870: INFO: Pod "pod-secrets-968922da-a294-49df-ac99-a25d661b2798": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022510539s STEP: Saw pod success Mar 12 21:16:38.870: INFO: Pod "pod-secrets-968922da-a294-49df-ac99-a25d661b2798" satisfied condition "success or failure" Mar 12 21:16:38.872: INFO: Trying to get logs from node jerma-worker pod pod-secrets-968922da-a294-49df-ac99-a25d661b2798 container secret-volume-test: STEP: delete the pod Mar 12 21:16:38.895: INFO: Waiting for pod pod-secrets-968922da-a294-49df-ac99-a25d661b2798 to disappear Mar 12 21:16:38.918: INFO: Pod pod-secrets-968922da-a294-49df-ac99-a25d661b2798 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:38.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4671" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":874,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:38.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:16:39.620: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:16:42.660: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:16:42.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3659-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-416" for this suite. STEP: Destroying namespace "webhook-416-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":57,"skipped":875,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:43.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-3e685a5d-5762-4221-8b3d-4fcb3c3f18bf STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-3e685a5d-5762-4221-8b3d-4fcb3c3f18bf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:48.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8066" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":887,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:48.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 12 21:16:48.309: INFO: Waiting up to 5m0s for pod "pod-160ecb2b-186f-42c3-a2da-1cf3b66fd1c2" in namespace "emptydir-4180" to be "success or failure" Mar 12 21:16:48.315: INFO: Pod "pod-160ecb2b-186f-42c3-a2da-1cf3b66fd1c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254303ms Mar 12 21:16:50.319: INFO: Pod "pod-160ecb2b-186f-42c3-a2da-1cf3b66fd1c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01005338s STEP: Saw pod success Mar 12 21:16:50.319: INFO: Pod "pod-160ecb2b-186f-42c3-a2da-1cf3b66fd1c2" satisfied condition "success or failure" Mar 12 21:16:50.321: INFO: Trying to get logs from node jerma-worker pod pod-160ecb2b-186f-42c3-a2da-1cf3b66fd1c2 container test-container: STEP: delete the pod Mar 12 21:16:50.338: INFO: Waiting for pod pod-160ecb2b-186f-42c3-a2da-1cf3b66fd1c2 to disappear Mar 12 21:16:50.349: INFO: Pod pod-160ecb2b-186f-42c3-a2da-1cf3b66fd1c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:16:50.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4180" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":903,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:16:50.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 12 21:16:50.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:50.474: INFO: Number of nodes with available pods: 0 Mar 12 21:16:50.475: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:16:51.479: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:51.482: INFO: Number of nodes with available pods: 0 Mar 12 21:16:51.482: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:16:52.478: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:52.480: INFO: Number of nodes with available pods: 1 Mar 12 21:16:52.480: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:16:53.478: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:53.480: INFO: Number of nodes with available pods: 2 Mar 12 21:16:53.480: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 12 21:16:53.493: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:53.496: INFO: Number of nodes with available pods: 1 Mar 12 21:16:53.496: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:16:54.499: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:54.502: INFO: Number of nodes with available pods: 1 Mar 12 21:16:54.502: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:16:55.501: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:55.504: INFO: Number of nodes with available pods: 1 Mar 12 21:16:55.504: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:16:56.499: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:56.500: INFO: Number of nodes with available pods: 1 Mar 12 21:16:56.500: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:16:57.501: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:57.504: INFO: Number of nodes with available pods: 1 Mar 12 21:16:57.504: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:16:58.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:58.514: INFO: Number of nodes with available pods: 1 Mar 12 21:16:58.514: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:16:59.499: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:16:59.502: INFO: Number of nodes with available pods: 1 Mar 12 21:16:59.502: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:00.524: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:00.526: INFO: Number of nodes with available pods: 1 Mar 12 21:17:00.526: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:01.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:01.502: INFO: Number of nodes with available pods: 1 Mar 12 21:17:01.502: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:02.499: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:02.502: INFO: Number of nodes with available pods: 1 Mar 12 21:17:02.502: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:03.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:03.504: INFO: Number of nodes with available pods: 1 Mar 12 21:17:03.504: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:04.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:04.503: INFO: Number of nodes with available pods: 1 Mar 12 21:17:04.503: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:05.499: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:05.501: INFO: Number of nodes with available pods: 1 Mar 12 21:17:05.501: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:06.514: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:06.517: INFO: Number of nodes with available pods: 1 Mar 12 21:17:06.517: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:17:07.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:17:07.503: INFO: Number of nodes with available pods: 2 Mar 12 21:17:07.503: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4731, will wait for the garbage collector to delete the pods Mar 12 21:17:07.563: INFO: Deleting DaemonSet.extensions daemon-set took: 5.285445ms Mar 12 21:17:07.863: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.215084ms Mar 12 21:17:16.065: INFO: Number of nodes with available pods: 0 Mar 12 21:17:16.065: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 21:17:16.092: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4731/daemonsets","resourceVersion":"1238300"},"items":null} Mar 12 21:17:16.095: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4731/pods","resourceVersion":"1238300"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:17:16.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4731" for this suite. 
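The DaemonSet used here is deliberately minimal; the sketch below shows the likely shape (the name and namespace come from the run, the labels, container name, and image are assumptions). Note it carries no toleration for node-role.kubernetes.io/master, which is why the log skips jerma-control-plane on every poll:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-4731
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set     # label assumed; the run does not print the selector
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                                      # name assumed
        image: docker.io/library/httpd:2.4.38-alpine   # image assumed

Because the DaemonSet controller recreates any deleted pod, stopping one pod only drops the available count until the replacement becomes ready, which is the roughly fourteen seconds of "Number of nodes with available pods: 1" polling visible above.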
• [SLOW TEST:25.755 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":60,"skipped":911,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:17:16.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:17:16.161: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:17:16.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1590" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":61,"skipped":931,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:17:16.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 12 21:17:16.891: INFO: Waiting up to 5m0s for pod "var-expansion-e9a740f2-de5b-47a0-9c7a-2f5b73e9d25b" in namespace "var-expansion-1320" to be "success or failure" Mar 12 21:17:16.894: INFO: Pod "var-expansion-e9a740f2-de5b-47a0-9c7a-2f5b73e9d25b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.105368ms Mar 12 21:17:18.901: INFO: Pod "var-expansion-e9a740f2-de5b-47a0-9c7a-2f5b73e9d25b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009712338s STEP: Saw pod success Mar 12 21:17:18.901: INFO: Pod "var-expansion-e9a740f2-de5b-47a0-9c7a-2f5b73e9d25b" satisfied condition "success or failure" Mar 12 21:17:18.903: INFO: Trying to get logs from node jerma-worker pod var-expansion-e9a740f2-de5b-47a0-9c7a-2f5b73e9d25b container dapi-container: STEP: delete the pod Mar 12 21:17:18.933: INFO: Waiting for pod var-expansion-e9a740f2-de5b-47a0-9c7a-2f5b73e9d25b to disappear Mar 12 21:17:18.937: INFO: Pod var-expansion-e9a740f2-de5b-47a0-9c7a-2f5b73e9d25b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:17:18.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1320" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":932,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:17:18.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 21:17:19.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5287' Mar 12 21:17:19.117: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 21:17:19.117: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Mar 12 21:17:23.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5287' Mar 12 21:17:23.248: INFO: stderr: "" Mar 12 21:17:23.248: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:17:23.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5287" for this suite. 
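As the deprecation warning in the run says, kubectl run with --generator=deployment/apps.v1 is shorthand for creating a Deployment. The object it produced is roughly the following; the name and image are from the run, and the run= label is the convention that generator used (assumed here, since the log does not print the resulting object):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
  namespace: kubectl-5287
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine

The replacements named in the warning are kubectl run --generator=run-pod/v1 for a bare pod, or an explicit manifest like this with kubectl create for anything controller-managed.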
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":63,"skipped":936,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:17:23.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 21:17:25.382: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:17:25.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8333" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":948,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:17:25.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-446, will wait for the garbage collector to delete the pods Mar 12 21:17:27.555: INFO: Deleting Job.batch foo took: 4.429458ms Mar 12 21:17:27.656: INFO: Terminating Job.batch foo pods took: 100.23915ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:06.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-446" for this suite. 
• [SLOW TEST:40.720 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":65,"skipped":970,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:06.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 12 21:18:06.247: INFO: Waiting up to 5m0s for pod "pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e" in namespace "emptydir-7856" to be "success or failure" Mar 12 21:18:06.263: INFO: Pod "pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.725664ms Mar 12 21:18:08.265: INFO: Pod "pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018096184s Mar 12 21:18:10.268: INFO: Pod "pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021341877s STEP: Saw pod success Mar 12 21:18:10.268: INFO: Pod "pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e" satisfied condition "success or failure" Mar 12 21:18:10.271: INFO: Trying to get logs from node jerma-worker pod pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e container test-container: STEP: delete the pod Mar 12 21:18:10.300: INFO: Waiting for pod pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e to disappear Mar 12 21:18:10.305: INFO: Pod pod-ba6bd2d8-6d95-41dd-bebf-b9348b7aa92e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:10.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7856" for this suite. 
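The tuple in the EmptyDir test name, (root,0644,default), encodes writer identity, expected file mode, and volume medium. A hand-rolled pod probing the same behavior, assuming busybox in place of the suite's dedicated mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-default     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: node-local storage, not tmpfs
```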
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":971,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:10.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 12 21:18:10.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5234' Mar 12 21:18:10.642: INFO: stderr: "" Mar 12 21:18:10.642: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 21:18:11.645: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 21:18:11.645: INFO: Found 0 / 1 Mar 12 21:18:12.646: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 21:18:12.646: INFO: Found 1 / 1 Mar 12 21:18:12.646: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 12 21:18:12.649: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 21:18:12.649: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 12 21:18:12.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-4zmm4 --namespace=kubectl-5234 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 12 21:18:12.768: INFO: stderr: "" Mar 12 21:18:12.768: INFO: stdout: "pod/agnhost-master-4zmm4 patched\n" STEP: checking annotations Mar 12 21:18:12.782: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 21:18:12.782: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:12.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5234" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":67,"skipped":986,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:12.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:18:13.363: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:18:15.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644693, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644693, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644693, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644693, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:18:18.412: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:18.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-802" for this suite. STEP: Destroying namespace "webhook-802-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.951 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":68,"skipped":990,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:18.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:18:18.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-331c1ab9-e950-4adb-8be4-5dbd52a4707b" in namespace "downward-api-7559" to be "success or failure" Mar 12 21:18:18.790: INFO: Pod "downwardapi-volume-331c1ab9-e950-4adb-8be4-5dbd52a4707b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.787548ms Mar 12 21:18:20.793: INFO: Pod "downwardapi-volume-331c1ab9-e950-4adb-8be4-5dbd52a4707b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008063457s STEP: Saw pod success Mar 12 21:18:20.794: INFO: Pod "downwardapi-volume-331c1ab9-e950-4adb-8be4-5dbd52a4707b" satisfied condition "success or failure" Mar 12 21:18:20.796: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-331c1ab9-e950-4adb-8be4-5dbd52a4707b container client-container: STEP: delete the pod Mar 12 21:18:20.816: INFO: Waiting for pod downwardapi-volume-331c1ab9-e950-4adb-8be4-5dbd52a4707b to disappear Mar 12 21:18:20.820: INFO: Pod downwardapi-volume-331c1ab9-e950-4adb-8be4-5dbd52a4707b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:20.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7559" for this suite. 
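The Downward API volume test verifies that a container's CPU request is exposed as a file inside the pod. A minimal reproduction, assuming busybox and an arbitrary 250m request; `divisor: 1m` makes the file report the request in millicores:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                   # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m               # file content becomes "250"
```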
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":991,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:20.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:18:20.906: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 21:18:22.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6493 create -f -' Mar 12 21:18:24.724: INFO: stderr: "" Mar 12 21:18:24.724: INFO: stdout: "e2e-test-crd-publish-openapi-443-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 12 21:18:24.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6493 delete e2e-test-crd-publish-openapi-443-crds test-cr' Mar 12 21:18:24.859: INFO: stderr: "" Mar 12 21:18:24.859: INFO: stdout: "e2e-test-crd-publish-openapi-443-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 12 21:18:24.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6493 apply -f -' Mar 12 21:18:25.086: INFO: stderr: "" Mar 12 21:18:25.086: INFO: stdout: "e2e-test-crd-publish-openapi-443-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 12 21:18:25.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6493 delete e2e-test-crd-publish-openapi-443-crds test-cr' Mar 12 21:18:25.185: INFO: stderr: "" Mar 12 21:18:25.185: INFO: stdout: "e2e-test-crd-publish-openapi-443-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 12 21:18:25.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-443-crds' Mar 12 21:18:25.383: INFO: stderr: "" Mar 12 21:18:25.383: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-443-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:27.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6493" for this suite. • [SLOW TEST:6.304 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":70,"skipped":1010,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:27.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 12 21:18:27.180: INFO: Waiting up to 5m0s for pod "pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9" in namespace "emptydir-4149" to be "success or failure" Mar 12 21:18:27.184: INFO: Pod "pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301849ms Mar 12 21:18:29.188: INFO: Pod "pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008394203s Mar 12 21:18:31.192: INFO: Pod "pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012011695s STEP: Saw pod success Mar 12 21:18:31.192: INFO: Pod "pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9" satisfied condition "success or failure" Mar 12 21:18:31.195: INFO: Trying to get logs from node jerma-worker pod pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9 container test-container: STEP: delete the pod Mar 12 21:18:31.212: INFO: Waiting for pod pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9 to disappear Mar 12 21:18:31.216: INFO: Pod pod-f4682687-5c8f-4ea4-bb8f-5f03db2633f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:31.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4149" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1019,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:31.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:18:31.265: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:33.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-129" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1038,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:33.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-2q6k STEP: Creating a pod to test atomic-volume-subpath Mar 12 21:18:33.433: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2q6k" in namespace "subpath-6977" to be "success or failure" Mar 12 21:18:33.447: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Pending", Reason="", readiness=false. Elapsed: 14.123481ms Mar 12 21:18:35.450: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016441099s Mar 12 21:18:37.453: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 4.020130708s Mar 12 21:18:39.456: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 6.023041329s Mar 12 21:18:41.460: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 8.026551093s Mar 12 21:18:43.463: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 10.029645409s Mar 12 21:18:45.500: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 12.066670013s Mar 12 21:18:47.503: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 14.070049884s Mar 12 21:18:49.507: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 16.073608703s Mar 12 21:18:51.511: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 18.077340425s Mar 12 21:18:53.514: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Running", Reason="", readiness=true. Elapsed: 20.080643333s Mar 12 21:18:55.517: INFO: Pod "pod-subpath-test-secret-2q6k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.083737034s STEP: Saw pod success Mar 12 21:18:55.517: INFO: Pod "pod-subpath-test-secret-2q6k" satisfied condition "success or failure" Mar 12 21:18:55.519: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-2q6k container test-container-subpath-secret-2q6k: STEP: delete the pod Mar 12 21:18:55.548: INFO: Waiting for pod pod-subpath-test-secret-2q6k to disappear Mar 12 21:18:55.562: INFO: Pod pod-subpath-test-secret-2q6k no longer exists STEP: Deleting pod pod-subpath-test-secret-2q6k Mar 12 21:18:55.562: INFO: Deleting pod "pod-subpath-test-secret-2q6k" in namespace "subpath-6977" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:55.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6977" for this suite. • [SLOW TEST:22.245 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":73,"skipped":1054,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:55.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:18:55.653: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b604e101-3bb0-45a9-becc-802bb3a839e5" in namespace "security-context-test-2713" to be "success or failure" Mar 12 21:18:55.672: INFO: Pod "busybox-readonly-false-b604e101-3bb0-45a9-becc-802bb3a839e5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.975377ms Mar 12 21:18:57.676: INFO: Pod "busybox-readonly-false-b604e101-3bb0-45a9-becc-802bb3a839e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022741897s Mar 12 21:18:57.676: INFO: Pod "busybox-readonly-false-b604e101-3bb0-45a9-becc-802bb3a839e5" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:57.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2713" for this suite. 
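readOnlyRootFilesystem=false is the permissive default, so the assertion here is simply that writes to the container's root filesystem succeed. A sketch of an equivalent pod (image and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false      # name pattern echoes the log
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /rootfs-write-check && echo ok"]
    securityContext:
      readOnlyRootFilesystem: false # flipping this to true makes the touch fail
```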
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:57.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 12 21:18:57.804: INFO: Waiting up to 5m0s for pod "pod-93c36be6-185f-4f55-bbd9-71d10944268f" in namespace "emptydir-9840" to be "success or failure" Mar 12 21:18:57.820: INFO: Pod "pod-93c36be6-185f-4f55-bbd9-71d10944268f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.983343ms Mar 12 21:18:59.825: INFO: Pod "pod-93c36be6-185f-4f55-bbd9-71d10944268f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020825301s STEP: Saw pod success Mar 12 21:18:59.825: INFO: Pod "pod-93c36be6-185f-4f55-bbd9-71d10944268f" satisfied condition "success or failure" Mar 12 21:18:59.828: INFO: Trying to get logs from node jerma-worker2 pod pod-93c36be6-185f-4f55-bbd9-71d10944268f container test-container: STEP: delete the pod Mar 12 21:18:59.866: INFO: Waiting for pod pod-93c36be6-185f-4f55-bbd9-71d10944268f to disappear Mar 12 21:18:59.874: INFO: Pod pod-93c36be6-185f-4f55-bbd9-71d10944268f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:18:59.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9840" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1089,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:18:59.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:19:00.562: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:19:02.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644740, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644740, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644740, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644740, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:19:05.661: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:19:05.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5185" for this suite. STEP: Destroying namespace "webhook-5185-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.079 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":76,"skipped":1168,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:19:05.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:19:08.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-410" for this suite. 
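This Kubelet test's whole contract is that whatever a container writes to stdout shows up via the logs endpoint. An equivalent hand check, with the message text assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'hello from a busybox command'"]
```

Afterwards, `kubectl logs busybox-scheduling-example` should print the line back.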
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1180,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:19:08.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-8ca0f6de-54b7-496c-8ab7-0687774524ca in namespace container-probe-719 Mar 12 21:19:10.166: INFO: Started pod busybox-8ca0f6de-54b7-496c-8ab7-0687774524ca in namespace container-probe-719 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 21:19:10.169: INFO: Initial restart count of pod busybox-8ca0f6de-54b7-496c-8ab7-0687774524ca is 0 Mar 12 21:19:56.800: INFO: Restart count of pod container-probe-719/busybox-8ca0f6de-54b7-496c-8ab7-0687774524ca is now 1 (46.630870714s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:19:56.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-719" for this suite. 
• [SLOW TEST:48.778 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1201,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:19:56.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:19:57.588: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:19:59.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644797, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719644797, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:20:02.672: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:20:02.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4671-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:20:03.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-9273" for this suite. STEP: Destroying namespace "webhook-9273-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.044 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":79,"skipped":1203,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:20:03.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-47477c3e-1100-48a0-9c53-348fc431c245 STEP: Creating a pod to test consume secrets Mar 12 21:20:03.967: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b27be22-9eb4-4482-9149-9f4b40d41db2" in namespace "projected-8495" to be "success or failure" Mar 12 21:20:03.978: INFO: Pod "pod-projected-secrets-3b27be22-9eb4-4482-9149-9f4b40d41db2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.782778ms Mar 12 21:20:05.983: INFO: Pod "pod-projected-secrets-3b27be22-9eb4-4482-9149-9f4b40d41db2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016053159s STEP: Saw pod success Mar 12 21:20:05.983: INFO: Pod "pod-projected-secrets-3b27be22-9eb4-4482-9149-9f4b40d41db2" satisfied condition "success or failure" Mar 12 21:20:05.986: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3b27be22-9eb4-4482-9149-9f4b40d41db2 container projected-secret-volume-test: STEP: delete the pod Mar 12 21:20:06.010: INFO: Waiting for pod pod-projected-secrets-3b27be22-9eb4-4482-9149-9f4b40d41db2 to disappear Mar 12 21:20:06.014: INFO: Pod pod-projected-secrets-3b27be22-9eb4-4482-9149-9f4b40d41db2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:20:06.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8495" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1204,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:20:06.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:20:06.115: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 12 21:20:08.147: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:20:09.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3494" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":81,"skipped":1219,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:20:09.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:20:09.217: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:20:15.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3260" for this suite. 
• [SLOW TEST:5.988 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":82,"skipped":1229,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:20:15.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:20:15.221: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 12 21:20:15.229: INFO: Number of nodes with available pods: 0 Mar 12 21:20:15.229: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 12 21:20:15.299: INFO: Number of nodes with available pods: 0 Mar 12 21:20:15.299: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:20:16.301: INFO: Number of nodes with available pods: 0 Mar 12 21:20:16.301: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:20:17.302: INFO: Number of nodes with available pods: 1 Mar 12 21:20:17.302: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 12 21:20:17.329: INFO: Number of nodes with available pods: 1 Mar 12 21:20:17.329: INFO: Number of running nodes: 0, number of available pods: 1 Mar 12 21:20:18.332: INFO: Number of nodes with available pods: 0 Mar 12 21:20:18.332: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 12 21:20:18.345: INFO: Number of nodes with available pods: 0 Mar 12 21:20:18.345: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:20:19.348: INFO: Number of nodes with available pods: 0 Mar 12 21:20:19.348: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:20:20.351: INFO: Number of nodes with available pods: 0 Mar 12 21:20:20.351: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:20:21.348: INFO: Number of nodes with available pods: 0 Mar 12 21:20:21.348: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:20:22.348: INFO: Number of nodes with available pods: 0 Mar 12 21:20:22.348: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:20:23.348: INFO: Number of nodes with available pods: 1 Mar 12 21:20:23.348: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1614, will wait for the garbage collector to delete the pods Mar 12 21:20:23.410: INFO: Deleting DaemonSet.extensions daemon-set took: 5.004081ms Mar 12 21:20:23.510: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.228638ms Mar 12 21:20:26.713: INFO: Number of nodes with available pods: 0 Mar 12 21:20:26.713: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 21:20:26.715: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1614/daemonsets","resourceVersion":"1239750"},"items":null} Mar 12 21:20:26.717: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1614/pods","resourceVersion":"1239750"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:20:26.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1614" for this suite. 
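The "complex daemon" choreography above is driven entirely by label selection: pods land only on nodes whose labels match the DaemonSet's nodeSelector, so relabeling a node from blue to green unschedules and reschedules the daemon pod. A sketch of the object under test; the name and the RollingUpdate strategy echo the log, while the label key and image are assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate           # the test switches to this mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green              # assumed key; the test relabels nodes blue -> green
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # assumed image
```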
• [SLOW TEST:11.595 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":83,"skipped":1247,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:20:26.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 12 21:20:26.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 12 21:20:26.872: INFO: stderr: "" Mar 12 21:20:26.872: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:20:26.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9018" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":84,"skipped":1265,"failed":0} SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:20:26.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-c587bab0-8c1a-4a22-a3b4-e2213af47cad STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c587bab0-8c1a-4a22-a3b4-e2213af47cad STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:22:01.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3605" for this suite. • [SLOW TEST:94.799 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1267,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:22:01.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 12 21:22:04.285: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5918917f-cbe6-45b9-8e40-e986d75a1d1a" Mar 12 21:22:04.285: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5918917f-cbe6-45b9-8e40-e986d75a1d1a" in namespace "pods-7777" to be "terminated due to deadline exceeded" Mar 12 21:22:04.293: INFO: Pod "pod-update-activedeadlineseconds-5918917f-cbe6-45b9-8e40-e986d75a1d1a": Phase="Running", Reason="", readiness=true. 
Elapsed: 7.990446ms Mar 12 21:22:06.297: INFO: Pod "pod-update-activedeadlineseconds-5918917f-cbe6-45b9-8e40-e986d75a1d1a": Phase="Running", Reason="", readiness=true. Elapsed: 2.011901662s Mar 12 21:22:08.300: INFO: Pod "pod-update-activedeadlineseconds-5918917f-cbe6-45b9-8e40-e986d75a1d1a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.01524523s Mar 12 21:22:08.300: INFO: Pod "pod-update-activedeadlineseconds-5918917f-cbe6-45b9-8e40-e986d75a1d1a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:22:08.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7777" for this suite. • [SLOW TEST:6.629 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:22:08.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-eb4a0bb2-ca9b-4792-ab76-ecccc3c61a5f STEP: Creating secret with name s-test-opt-upd-53156bd2-d7a4-4360-9375-b17bc1dc43bf STEP: Creating the pod STEP: Deleting secret s-test-opt-del-eb4a0bb2-ca9b-4792-ab76-ecccc3c61a5f STEP: Updating secret s-test-opt-upd-53156bd2-d7a4-4360-9375-b17bc1dc43bf STEP: Creating secret with name s-test-opt-create-72eb4b26-8c3c-4d24-821c-3ba006bcea3e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:16.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1789" for this suite. 
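The projected-secret test above mounts secrets marked optional, then deletes one, updates one, and creates one while waiting for the mounted files to converge. A sketch of the volume layout under test follows; the secret names come from the log, while the mount path, container name, and image are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets
  spec:
    containers:
    - name: creates-volume-test
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: s-test-opt-del-eb4a0bb2-ca9b-4792-ab76-ecccc3c61a5f
            optional: true          # deleting an optional secret must not break the mount
        - secret:
            name: s-test-opt-upd-53156bd2-d7a4-4360-9375-b17bc1dc43bf
            optional: true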
• [SLOW TEST:68.478 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:16.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-4197 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4197 to expose endpoints map[] Mar 12 21:23:16.896: INFO: Get endpoints failed (13.06582ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 12 21:23:17.899: INFO: successfully validated that service multi-endpoint-test in namespace services-4197 exposes endpoints map[] (1.016075639s elapsed) STEP: Creating pod pod1 in namespace services-4197 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4197 to expose endpoints map[pod1:[100]] Mar 12 21:23:19.944: INFO: successfully validated that service multi-endpoint-test in namespace services-4197 exposes endpoints map[pod1:[100]] (2.038272292s elapsed) STEP: Creating pod pod2 in namespace services-4197 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4197 to expose endpoints map[pod1:[100] pod2:[101]] Mar 12 21:23:21.998: INFO: successfully validated that service multi-endpoint-test in namespace services-4197 exposes endpoints map[pod1:[100] pod2:[101]] (2.051424239s elapsed) STEP: Deleting pod pod1 in namespace services-4197 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4197 to expose endpoints map[pod2:[101]] Mar 12 21:23:22.054: INFO: successfully validated that service multi-endpoint-test in namespace services-4197 exposes endpoints map[pod2:[101]] (51.94543ms elapsed) STEP: Deleting pod pod2 in namespace services-4197 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4197 to expose endpoints map[] Mar 12 21:23:22.091: INFO: successfully validated that service multi-endpoint-test in namespace services-4197 exposes endpoints map[] (28.686728ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:22.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4197" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:5.430 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":88,"skipped":1325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:22.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:35.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8045" for this suite. • [SLOW TEST:13.170 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":89,"skipped":1355,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:35.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 12 21:23:35.444: INFO: Created pod &Pod{ObjectMeta:{dns-6032 dns-6032 /api/v1/namespaces/dns-6032/pods/dns-6032 6c60be9a-59f7-4f11-8742-96b37d5fb96f 1240500 0 2020-03-12 21:23:35 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6kcdh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6kcdh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6kcdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceLi
st{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Mar 12 21:23:37.572: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6032 PodName:dns-6032 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:23:37.572: INFO: >>> kubeConfig: /root/.kube/config I0312 21:23:37.609857 6 log.go:172] (0xc00411c420) (0xc001db6640) Create stream I0312 21:23:37.609889 6 log.go:172] (0xc00411c420) (0xc001db6640) Stream added, broadcasting: 1 I0312 21:23:37.612669 6 log.go:172] (0xc00411c420) Reply frame received for 1 I0312 21:23:37.612721 6 log.go:172] (0xc00411c420) (0xc001e3c000) Create stream I0312 21:23:37.612747 6 log.go:172] (0xc00411c420) (0xc001e3c000) Stream added, broadcasting: 3 I0312 21:23:37.613897 6 log.go:172] (0xc00411c420) Reply frame received for 3 I0312 21:23:37.613936 6 log.go:172] (0xc00411c420) (0xc001db66e0) Create stream I0312 21:23:37.613949 6 log.go:172] (0xc00411c420) (0xc001db66e0) Stream added, broadcasting: 5 I0312 21:23:37.614919 6 log.go:172] (0xc00411c420) Reply frame received for 5 I0312 21:23:37.678857 6 log.go:172] (0xc00411c420) Data frame received for 3 I0312 21:23:37.678883 6 log.go:172] (0xc001e3c000) (3) Data frame handling I0312 21:23:37.678902 6 log.go:172] (0xc001e3c000) (3) Data frame sent I0312 21:23:37.679679 6 log.go:172] (0xc00411c420) Data frame received for 5 I0312 21:23:37.679703 6 log.go:172] (0xc001db66e0) (5) Data frame handling I0312 21:23:37.679964 6 log.go:172] (0xc00411c420) Data frame received for 3 I0312 21:23:37.679987 6 log.go:172] (0xc001e3c000) (3) Data frame handling I0312 21:23:37.681728 6 log.go:172] (0xc00411c420) Data frame received for 1 I0312 21:23:37.681755 6 log.go:172] (0xc001db6640) (1) Data frame handling I0312 21:23:37.681773 6 log.go:172] (0xc001db6640) (1) Data frame sent I0312 21:23:37.681786 6 log.go:172] (0xc00411c420) (0xc001db6640) Stream removed, broadcasting: 1 I0312 21:23:37.681813 6 log.go:172] (0xc00411c420) Go away received I0312 21:23:37.681909 6 log.go:172] (0xc00411c420) (0xc001db6640) Stream removed, broadcasting: 1 I0312 21:23:37.681925 6 log.go:172] (0xc00411c420) (0xc001e3c000) Stream removed, broadcasting: 3 I0312 21:23:37.681931 6 log.go:172] (0xc00411c420) (0xc001db66e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 12 21:23:37.681: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6032 PodName:dns-6032 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:23:37.681: INFO: >>> kubeConfig: /root/.kube/config I0312 21:23:37.712003 6 log.go:172] (0xc00411ca50) (0xc001db6960) Create stream I0312 21:23:37.712030 6 log.go:172] (0xc00411ca50) (0xc001db6960) Stream added, broadcasting: 1 I0312 21:23:37.724514 6 log.go:172] (0xc00411ca50) Reply frame received for 1 I0312 21:23:37.724556 6 log.go:172] (0xc00411ca50) (0xc001e3c140) Create stream I0312 21:23:37.724571 6 log.go:172] (0xc00411ca50) (0xc001e3c140) Stream added, broadcasting: 3 I0312 21:23:37.725359 6 log.go:172] (0xc00411ca50) Reply frame received for 3 I0312 21:23:37.725391 6 log.go:172] (0xc00411ca50) (0xc001d34640) Create stream I0312 21:23:37.725408 6 log.go:172] (0xc00411ca50) (0xc001d34640) Stream added, broadcasting: 5 I0312 21:23:37.726104 6 log.go:172] (0xc00411ca50) Reply frame received for 5 I0312 21:23:37.794934 6 log.go:172] (0xc00411ca50) Data frame received for 3 I0312 21:23:37.794960 6 log.go:172] (0xc001e3c140) (3) Data frame handling I0312 21:23:37.794985 6 log.go:172] (0xc001e3c140) (3) Data frame sent I0312 21:23:37.795776 6 log.go:172] (0xc00411ca50) Data frame received for 3 I0312 21:23:37.795790 6 log.go:172] (0xc001e3c140) (3) Data frame handling I0312 21:23:37.795909 6 log.go:172] (0xc00411ca50) Data frame received for 5 I0312 21:23:37.795926 6 log.go:172] (0xc001d34640) (5) Data frame handling I0312 21:23:37.796680 6 log.go:172] (0xc00411ca50) Data frame received for 1 I0312 21:23:37.796698 6 log.go:172] (0xc001db6960) (1) Data frame handling I0312 21:23:37.796733 6 log.go:172] (0xc001db6960) (1) Data frame sent I0312 21:23:37.796745 6 log.go:172] (0xc00411ca50) (0xc001db6960) Stream removed, broadcasting: 1 I0312 21:23:37.796760 6 log.go:172] (0xc00411ca50) Go away received I0312 21:23:37.796820 6 log.go:172] (0xc00411ca50) (0xc001db6960) Stream removed, broadcasting: 1 I0312 21:23:37.796833 6 log.go:172] (0xc00411ca50) (0xc001e3c140) Stream removed, broadcasting: 3 I0312 21:23:37.796839 6 log.go:172] (0xc00411ca50) (0xc001d34640) Stream removed, broadcasting: 5 Mar 12 21:23:37.796: INFO: Deleting pod dns-6032... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:37.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6032" for this suite. 
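The pod created above carries the customized DNS settings visible in the spec dump: dnsPolicy None with a single upstream nameserver and one search path. Rendered as a manifest, that configuration looks like this (the agnhost image and pause argument are taken from the same dump):

  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-6032
  spec:
    dnsPolicy: "None"               # ignore the cluster DNS entirely
    dnsConfig:
      nameservers:
      - 1.1.1.1                     # becomes the only nameserver in /etc/resolv.conf
      searches:
      - resolv.conf.local
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]

The two exec streams above then run /agnhost dns-suffix and /agnhost dns-server-list inside the pod to confirm that the kubelet wrote exactly these values into the container's resolver configuration.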
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":90,"skipped":1365,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:37.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 21:23:37.883: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:40.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5451" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":91,"skipped":1375,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:40.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-0828b4b5-8c49-403f-b326-3f4024fd1fac STEP: Creating a pod to test consume configMaps Mar 12 21:23:40.994: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fbffa883-358c-48e4-bae5-5adcb0450df2" in namespace "projected-9569" to be "success or failure" Mar 12 21:23:41.001: INFO: Pod "pod-projected-configmaps-fbffa883-358c-48e4-bae5-5adcb0450df2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594796ms Mar 12 21:23:43.007: INFO: Pod "pod-projected-configmaps-fbffa883-358c-48e4-bae5-5adcb0450df2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012233015s STEP: Saw pod success Mar 12 21:23:43.007: INFO: Pod "pod-projected-configmaps-fbffa883-358c-48e4-bae5-5adcb0450df2" satisfied condition "success or failure" Mar 12 21:23:43.008: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-fbffa883-358c-48e4-bae5-5adcb0450df2 container projected-configmap-volume-test: STEP: delete the pod Mar 12 21:23:43.026: INFO: Waiting for pod pod-projected-configmaps-fbffa883-358c-48e4-bae5-5adcb0450df2 to disappear Mar 12 21:23:43.031: INFO: Pod pod-projected-configmaps-fbffa883-358c-48e4-bae5-5adcb0450df2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:43.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9569" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:43.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:47.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4730" for this suite. 
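The kubelet test above schedules a busybox command that always fails and then asserts that the container status reports a terminated state with a reason. A sketch of the kind of pod that exercises this path follows; the pod name, command, restart policy, and image tag are all assumptions, as the log identifies only the scenario:

  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false
  spec:
    restartPolicy: Never            # assumed; lets the failure surface as a terminated state
    containers:
    - name: bin-false
      image: busybox
      command: ["/bin/false"]       # always exits non-zero, so the kubelet records a terminated reason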
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:47.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 21:23:47.202: INFO: Waiting up to 5m0s for pod "downward-api-d1d611d0-a1ca-42c4-8636-3f3ba5f4076d" in namespace "downward-api-6993" to be "success or failure" Mar 12 21:23:47.216: INFO: Pod "downward-api-d1d611d0-a1ca-42c4-8636-3f3ba5f4076d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.993928ms Mar 12 21:23:49.220: INFO: Pod "downward-api-d1d611d0-a1ca-42c4-8636-3f3ba5f4076d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017982033s STEP: Saw pod success Mar 12 21:23:49.220: INFO: Pod "downward-api-d1d611d0-a1ca-42c4-8636-3f3ba5f4076d" satisfied condition "success or failure" Mar 12 21:23:49.223: INFO: Trying to get logs from node jerma-worker2 pod downward-api-d1d611d0-a1ca-42c4-8636-3f3ba5f4076d container dapi-container: STEP: delete the pod Mar 12 21:23:49.264: INFO: Waiting for pod downward-api-d1d611d0-a1ca-42c4-8636-3f3ba5f4076d to disappear Mar 12 21:23:49.286: INFO: Pod downward-api-d1d611d0-a1ca-42c4-8636-3f3ba5f4076d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:23:49.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6993" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1444,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:23:49.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 12 21:23:49.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:23:49.361: INFO: Number of nodes with available pods: 0 Mar 12 21:23:49.361: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:23:50.367: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:23:50.369: INFO: Number of nodes with available pods: 0 Mar 12 21:23:50.369: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:23:51.365: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:23:51.368: INFO: Number of nodes with available pods: 2 Mar 12 21:23:51.368: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 12 21:23:51.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:23:51.409: INFO: Number of nodes with available pods: 2 Mar 12 21:23:51.409: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6688, will wait for the garbage collector to delete the pods Mar 12 21:23:52.499: INFO: Deleting DaemonSet.extensions daemon-set took: 4.552696ms Mar 12 21:23:52.799: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.227172ms Mar 12 21:24:06.106: INFO: Number of nodes with available pods: 0 Mar 12 21:24:06.107: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 21:24:06.109: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6688/daemonsets","resourceVersion":"1240776"},"items":null} Mar 12 21:24:06.111: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6688/pods","resourceVersion":"1240776"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:24:06.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6688" for this suite. • [SLOW TEST:16.830 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":95,"skipped":1458,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:24:06.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-9j6p2 in namespace proxy-8155 I0312 21:24:06.201817 6 runners.go:189] Created replication controller with name: proxy-service-9j6p2, namespace: proxy-8155, replica count: 1 I0312 21:24:07.252195 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0312 21:24:08.252398 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 21:24:09.252605 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 21:24:10.252828 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 21:24:11.253032 6 runners.go:189] 
proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 21:24:12.253257 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 21:24:13.253443 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 21:24:14.253654 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0312 21:24:15.253827 6 runners.go:189] proxy-service-9j6p2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 21:24:15.269: INFO: setup took 9.105802049s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 12 21:24:15.277: INFO: (0) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 8.279016ms) Mar 12 21:24:15.279: INFO: (0) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 9.969747ms) Mar 12 21:24:15.279: INFO: (0) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 10.068646ms) Mar 12 21:24:15.279: INFO: (0) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 10.138155ms) Mar 12 21:24:15.279: INFO: (0) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 10.355394ms) Mar 12 21:24:15.279: INFO: (0) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 10.425374ms) Mar 12 21:24:15.280: INFO: (0) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 11.396132ms) Mar 12 21:24:15.280: INFO: (0) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 11.383735ms) Mar 12 21:24:15.283: INFO: (0) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 14.409446ms) Mar 12 21:24:15.283: INFO: (0) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 14.075976ms) Mar 12 21:24:15.284: INFO: (0) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... (200; 15.031912ms) Mar 12 21:24:15.286: INFO: (0) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 17.548545ms) Mar 12 21:24:15.287: INFO: (0) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 17.78025ms) Mar 12 21:24:15.289: INFO: (0) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 19.88952ms) Mar 12 21:24:15.289: INFO: (0) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 19.910123ms) Mar 12 21:24:15.291: INFO: (0) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test<... (200; 6.780928ms) Mar 12 21:24:15.298: INFO: (1) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... 
(200; 7.123694ms) Mar 12 21:24:15.298: INFO: (1) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 7.201535ms) Mar 12 21:24:15.298: INFO: (1) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 7.265448ms) Mar 12 21:24:15.299: INFO: (1) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 7.54742ms) Mar 12 21:24:15.299: INFO: (1) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 7.846765ms) Mar 12 21:24:15.299: INFO: (1) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 5.739857ms) Mar 12 21:24:15.306: INFO: (2) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 6.114509ms) Mar 12 21:24:15.307: INFO: (2) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 6.163719ms) Mar 12 21:24:15.307: INFO: (2) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 6.230398ms) Mar 12 21:24:15.307: INFO: (2) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 6.243486ms) Mar 12 21:24:15.307: INFO: (2) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 6.513962ms) Mar 12 21:24:15.307: INFO: (2) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 9.331811ms) Mar 12 21:24:15.318: INFO: (3) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 9.621804ms) Mar 12 21:24:15.319: INFO: (3) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 11.08931ms) Mar 12 21:24:15.319: INFO: (3) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 11.05743ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 11.363205ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 11.325417ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 11.382277ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 11.437424ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 11.358432ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 11.384249ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 11.43019ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 11.483777ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 11.472302ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 11.471411ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 11.503175ms) Mar 12 21:24:15.320: INFO: (3) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test<... 
(200; 5.346103ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 6.059652ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 6.153601ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 6.199648ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 6.204605ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 6.2856ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 6.312756ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 6.344726ms) Mar 12 21:24:15.326: INFO: (4) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 6.307017ms) Mar 12 21:24:15.329: INFO: (5) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 2.615166ms) Mar 12 21:24:15.329: INFO: (5) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 2.64132ms) Mar 12 21:24:15.329: INFO: (5) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 2.597705ms) Mar 12 21:24:15.329: INFO: (5) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... (200; 2.661908ms) Mar 12 21:24:15.329: INFO: (5) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 3.113982ms) Mar 12 21:24:15.330: INFO: (5) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 3.283167ms) Mar 12 21:24:15.330: INFO: (5) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.306455ms) Mar 12 21:24:15.330: INFO: (5) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 3.511261ms) Mar 12 21:24:15.330: INFO: (5) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.862436ms) Mar 12 21:24:15.330: INFO: (5) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 3.490161ms) Mar 12 21:24:15.335: INFO: (6) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 3.945332ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 4.388262ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 4.468971ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.535037ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test<... 
(200; 4.608621ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 4.76328ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 4.778758ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 4.847835ms) Mar 12 21:24:15.336: INFO: (6) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 4.782315ms) Mar 12 21:24:15.340: INFO: (7) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 2.997007ms) Mar 12 21:24:15.340: INFO: (7) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 3.074367ms) Mar 12 21:24:15.340: INFO: (7) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.098392ms) Mar 12 21:24:15.340: INFO: (7) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.10528ms) Mar 12 21:24:15.340: INFO: (7) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 3.207319ms) Mar 12 21:24:15.340: INFO: (7) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... (200; 3.290694ms) Mar 12 21:24:15.340: INFO: (7) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test (200; 3.833395ms) Mar 12 21:24:15.341: INFO: (7) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.385144ms) Mar 12 21:24:15.341: INFO: (7) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 4.579071ms) Mar 12 21:24:15.341: INFO: (7) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 4.646938ms) Mar 12 21:24:15.341: INFO: (7) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 4.64405ms) Mar 12 21:24:15.341: INFO: (7) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 4.690887ms) Mar 12 21:24:15.341: INFO: (7) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 4.788158ms) Mar 12 21:24:15.344: INFO: (8) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.138422ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test<... (200; 3.76114ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.743549ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.829632ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... 
(200; 3.786994ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 3.815699ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.859026ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 3.816389ms) Mar 12 21:24:15.345: INFO: (8) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 3.908047ms) Mar 12 21:24:15.346: INFO: (8) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 4.314464ms) Mar 12 21:24:15.346: INFO: (8) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 4.303186ms) Mar 12 21:24:15.346: INFO: (8) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 4.29679ms) Mar 12 21:24:15.346: INFO: (8) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 4.27377ms) Mar 12 21:24:15.346: INFO: (8) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.382717ms) Mar 12 21:24:15.349: INFO: (9) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.167571ms) Mar 12 21:24:15.349: INFO: (9) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 3.458748ms) Mar 12 21:24:15.349: INFO: (9) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 3.559422ms) Mar 12 21:24:15.349: INFO: (9) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 3.576571ms) Mar 12 21:24:15.350: INFO: (9) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 3.75403ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.931058ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 5.011863ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 5.011962ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 5.001791ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 5.117938ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 5.350704ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 5.457707ms) Mar 12 21:24:15.351: INFO: (9) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 5.595662ms) Mar 12 21:24:15.354: INFO: (10) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 2.515897ms) Mar 12 21:24:15.355: INFO: (10) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.289606ms) Mar 12 21:24:15.355: INFO: (10) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.506256ms) Mar 12 21:24:15.355: INFO: (10) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... 
(200; 3.474251ms) Mar 12 21:24:15.356: INFO: (10) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 4.132762ms) Mar 12 21:24:15.356: INFO: (10) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 4.7555ms) Mar 12 21:24:15.357: INFO: (10) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 5.186755ms) Mar 12 21:24:15.357: INFO: (10) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 5.955759ms) Mar 12 21:24:15.358: INFO: (10) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 6.309259ms) Mar 12 21:24:15.358: INFO: (10) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 6.295939ms) Mar 12 21:24:15.358: INFO: (10) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 6.457529ms) Mar 12 21:24:15.358: INFO: (10) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 6.491574ms) Mar 12 21:24:15.358: INFO: (10) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 6.482715ms) Mar 12 21:24:15.358: INFO: (10) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 6.5668ms) Mar 12 21:24:15.361: INFO: (11) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.049773ms) Mar 12 21:24:15.361: INFO: (11) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 3.329713ms) Mar 12 21:24:15.361: INFO: (11) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 3.394218ms) Mar 12 21:24:15.362: INFO: (11) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... 
(200; 3.499183ms) Mar 12 21:24:15.362: INFO: (11) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.492221ms) Mar 12 21:24:15.362: INFO: (11) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 3.63195ms) Mar 12 21:24:15.362: INFO: (11) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.596194ms) Mar 12 21:24:15.362: INFO: (11) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 3.742016ms) Mar 12 21:24:15.363: INFO: (11) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 4.622855ms) Mar 12 21:24:15.363: INFO: (11) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 4.623153ms) Mar 12 21:24:15.363: INFO: (11) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 4.672125ms) Mar 12 21:24:15.363: INFO: (11) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.710386ms) Mar 12 21:24:15.363: INFO: (11) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 4.678366ms) Mar 12 21:24:15.363: INFO: (11) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 4.768061ms) Mar 12 21:24:15.365: INFO: (12) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test (200; 5.116762ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 5.204071ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 5.151018ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 5.132567ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 5.188873ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 5.13583ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 5.163072ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 5.177227ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... 
(200; 5.166611ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 5.238571ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 5.261752ms) Mar 12 21:24:15.368: INFO: (12) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 5.24683ms) Mar 12 21:24:15.372: INFO: (13) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 4.116673ms) Mar 12 21:24:15.373: INFO: (13) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.601307ms) Mar 12 21:24:15.373: INFO: (13) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 4.655831ms) Mar 12 21:24:15.373: INFO: (13) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 4.879502ms) Mar 12 21:24:15.373: INFO: (13) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 5.058334ms) Mar 12 21:24:15.373: INFO: (13) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 5.13845ms) Mar 12 21:24:15.373: INFO: (13) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 5.667456ms) Mar 12 21:24:15.374: INFO: (13) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 5.744072ms) Mar 12 21:24:15.375: INFO: (13) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 6.436396ms) Mar 12 21:24:15.375: INFO: (13) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 6.595938ms) Mar 12 21:24:15.375: INFO: (13) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 6.558329ms) Mar 12 21:24:15.379: INFO: (14) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 4.177098ms) Mar 12 21:24:15.379: INFO: (14) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 4.363506ms) Mar 12 21:24:15.379: INFO: (14) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 4.45209ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 4.626683ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 4.782303ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 4.928337ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... 
(200; 4.854619ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 4.947657ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 5.036644ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 5.176405ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 5.324543ms) Mar 12 21:24:15.380: INFO: (14) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 5.313ms) Mar 12 21:24:15.381: INFO: (14) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 5.644475ms) Mar 12 21:24:15.381: INFO: (14) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 5.804167ms) Mar 12 21:24:15.381: INFO: (14) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 6.249295ms) Mar 12 21:24:15.384: INFO: (15) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 2.966141ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 3.331872ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 3.432817ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.85106ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.919613ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.9668ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 3.996789ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 4.079216ms) Mar 12 21:24:15.385: INFO: (15) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... (200; 4.052401ms) Mar 12 21:24:15.386: INFO: (15) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 4.094321ms) Mar 12 21:24:15.386: INFO: (15) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 4.277983ms) Mar 12 21:24:15.386: INFO: (15) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 4.288886ms) Mar 12 21:24:15.386: INFO: (15) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 4.347541ms) Mar 12 21:24:15.386: INFO: (15) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test<... (200; 2.512001ms) Mar 12 21:24:15.389: INFO: (16) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 2.46601ms) Mar 12 21:24:15.389: INFO: (16) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 2.499447ms) Mar 12 21:24:15.390: INFO: (16) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.880625ms) Mar 12 21:24:15.390: INFO: (16) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... 
(200; 3.81569ms) Mar 12 21:24:15.390: INFO: (16) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 3.894652ms) Mar 12 21:24:15.390: INFO: (16) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 3.881276ms) Mar 12 21:24:15.391: INFO: (16) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.983027ms) Mar 12 21:24:15.391: INFO: (16) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 5.056712ms) Mar 12 21:24:15.391: INFO: (16) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname2/proxy/: bar (200; 5.017603ms) Mar 12 21:24:15.391: INFO: (16) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 5.028914ms) Mar 12 21:24:15.394: INFO: (17) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.122514ms) Mar 12 21:24:15.394: INFO: (17) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 3.107453ms) Mar 12 21:24:15.395: INFO: (17) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.482334ms) Mar 12 21:24:15.395: INFO: (17) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.728951ms) Mar 12 21:24:15.395: INFO: (17) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 3.682676ms) Mar 12 21:24:15.395: INFO: (17) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.879526ms) Mar 12 21:24:15.395: INFO: (17) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... (200; 3.939744ms) Mar 12 21:24:15.395: INFO: (17) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: ... (200; 3.576244ms) Mar 12 21:24:15.399: INFO: (18) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 3.620138ms) Mar 12 21:24:15.399: INFO: (18) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 3.623094ms) Mar 12 21:24:15.399: INFO: (18) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.689266ms) Mar 12 21:24:15.399: INFO: (18) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: test<... 
(200; 3.660101ms) Mar 12 21:24:15.399: INFO: (18) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.713501ms) Mar 12 21:24:15.400: INFO: (18) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 4.075082ms) Mar 12 21:24:15.400: INFO: (18) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 4.08062ms) Mar 12 21:24:15.400: INFO: (18) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.074373ms) Mar 12 21:24:15.400: INFO: (18) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 4.155843ms) Mar 12 21:24:15.400: INFO: (18) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 4.139488ms) Mar 12 21:24:15.400: INFO: (18) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 4.131086ms) Mar 12 21:24:15.403: INFO: (19) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname1/proxy/: foo (200; 2.821786ms) Mar 12 21:24:15.403: INFO: (19) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:1080/proxy/: test<... (200; 3.013386ms) Mar 12 21:24:15.403: INFO: (19) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:1080/proxy/: ... (200; 3.159509ms) Mar 12 21:24:15.403: INFO: (19) /api/v1/namespaces/proxy-8155/services/proxy-service-9j6p2:portname1/proxy/: foo (200; 3.604373ms) Mar 12 21:24:15.403: INFO: (19) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:460/proxy/: tls baz (200; 3.568862ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 3.86406ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/services/http:proxy-service-9j6p2:portname2/proxy/: bar (200; 4.497946ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:160/proxy/: foo (200; 4.474244ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname1/proxy/: tls baz (200; 4.495079ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6:162/proxy/: bar (200; 4.564717ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:462/proxy/: tls qux (200; 4.511754ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/services/https:proxy-service-9j6p2:tlsportname2/proxy/: tls qux (200; 4.506165ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/pods/proxy-service-9j6p2-vskr6/proxy/: test (200; 4.542432ms) Mar 12 21:24:15.404: INFO: (19) /api/v1/namespaces/proxy-8155/pods/https:proxy-service-9j6p2-vskr6:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 21:24:28.239: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container 
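[editor's note] The numbered trials above, (0) through (19), repeatedly exercise the apiserver proxy subresource against every pod and service port combination in namespace proxy-8155 and record per-request latency; fragments such as "test<..." and bare "..." are response bodies whose HTML markup was lost when this log was captured. For reference, the same request path can be driven from client-go via the pods proxy subresource. The sketch below is illustrative only: the cluster from this run is gone, and it assumes a client-go release (0.18 or later) where DoRaw takes a context; the v1.17-era client used by this suite took none.

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the suite logs (">>> kubeConfig: /root/.kube/config").
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // ProxyGet issues GET /api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/,
        // the same URL shape as the trials above. DoRaw takes a context in client-go 0.18+.
        body, err := cs.CoreV1().Pods("proxy-8155").
            ProxyGet("http", "proxy-service-9j6p2-vskr6", "160", "/", nil).
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", body) // the trials above record "foo" for this endpoint
    }

Against a live cluster the equivalent one-liner is kubectl get --raw "/api/v1/namespaces/proxy-8155/pods/http:proxy-service-9j6p2-vskr6:160/proxy/", which should print the same body ("foo") the trials above record.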
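[editor's note] The Container Runtime test that finishes just above ("create the container" through "delete the container") starts a container whose only output is DONE and which exits non-zero, then asserts that TerminationMessagePolicy FallbackToLogsOnError makes the kubelet copy the log tail into the container's termination message, matching &{DONE}. A minimal pod spec that reproduces that behavior is sketched below; the pod name, image tag, and command are illustrative assumptions, not values taken from this run.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The container writes DONE to its log, writes nothing to
        // /dev/termination-log, and exits non-zero; FallbackToLogsOnError
        // then makes the kubelet use the log tail as the termination message.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:                     "main",
                    Image:                    "busybox:1.31", // illustrative image
                    Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
                    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                }},
            },
        }
        fmt.Println(pod.Spec.Containers[0].TerminationMessagePolicy) // FallbackToLogsOnError
    }

Once such a pod fails, the copied message is visible with kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'.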
[AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:24:28.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8833" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1490,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:24:28.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:24:28.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f85f50e-2d1a-43d6-af5e-2c89e6abd796" in namespace "downward-api-1457" to be "success or failure" Mar 12 21:24:28.392: INFO: Pod "downwardapi-volume-8f85f50e-2d1a-43d6-af5e-2c89e6abd796": Phase="Pending", Reason="", readiness=false. Elapsed: 3.922033ms Mar 12 21:24:30.395: INFO: Pod "downwardapi-volume-8f85f50e-2d1a-43d6-af5e-2c89e6abd796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007202711s STEP: Saw pod success Mar 12 21:24:30.395: INFO: Pod "downwardapi-volume-8f85f50e-2d1a-43d6-af5e-2c89e6abd796" satisfied condition "success or failure" Mar 12 21:24:30.398: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8f85f50e-2d1a-43d6-af5e-2c89e6abd796 container client-container: STEP: delete the pod Mar 12 21:24:30.430: INFO: Waiting for pod downwardapi-volume-8f85f50e-2d1a-43d6-af5e-2c89e6abd796 to disappear Mar 12 21:24:30.434: INFO: Pod downwardapi-volume-8f85f50e-2d1a-43d6-af5e-2c89e6abd796 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:24:30.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1457" for this suite. 
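[editor's note] The Downward API volume test above projects the container's memory limit into a file through a downward API volume with a resourceFieldRef, then reads it back from the pod's logs. A sketch of an equivalent pod spec follows; the volume name, mount path, file name, image, and the 64Mi limit are illustrative assumptions, since the log does not show the exact values this run used (only the container name, client-container, appears above).

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container", // matches the container name in the log
                    Image:   "busybox:1.31",     // illustrative image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"), // illustrative limit
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                // resourceFieldRef projects this container's
                                // limits.memory into the mounted file.
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }

With the default divisor of 1, the projected file holds the limit in bytes, so a 64Mi limit reads back as 67108864.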
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1506,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:24:30.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:24:30.497: INFO: Creating deployment "webserver-deployment" Mar 12 21:24:30.500: INFO: Waiting for observed generation 1 Mar 12 21:24:32.508: INFO: Waiting for all required pods to come up Mar 12 21:24:32.510: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 12 21:24:34.517: INFO: Waiting for deployment "webserver-deployment" to complete Mar 12 21:24:34.520: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 12 21:24:34.524: INFO: Updating deployment webserver-deployment Mar 12 21:24:34.524: INFO: Waiting for observed generation 2 Mar 12 21:24:36.549: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 12 21:24:36.551: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 12 21:24:36.553: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 12 21:24:36.559: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 12 21:24:36.559: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 12 21:24:36.561: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 12 21:24:36.564: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 12 21:24:36.564: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 12 21:24:36.569: INFO: Updating deployment webserver-deployment Mar 12 21:24:36.569: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 12 21:24:36.596: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 12 21:24:36.608: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 21:24:36.791: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7631 /apis/apps/v1/namespaces/deployment-7631/deployments/webserver-deployment 36cf3572-3565-4c24-9f81-0274b6353c53 1241185 3 2020-03-12 21:24:30 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054f8478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-12 21:24:34 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-12 21:24:36 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 12 21:24:36.843: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7631 /apis/apps/v1/namespaces/deployment-7631/replicasets/webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 1241240 3 2020-03-12 21:24:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 36cf3572-3565-4c24-9f81-0274b6353c53 0xc00313bdc7 0xc00313bdc8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00313be38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 21:24:36.843: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 12 21:24:36.844: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7631 
/apis/apps/v1/namespaces/deployment-7631/replicasets/webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 1241230 3 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 36cf3572-3565-4c24-9f81-0274b6353c53 0xc00313bd07 0xc00313bd08}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00313bd68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 12 21:24:36.924: INFO: Pod "webserver-deployment-595b5b9587-24ld4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-24ld4 webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-24ld4 fd1fd82e-7984-43ba-a714-c91a16cc41e2 1241205 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb02f7 0xc002bb02f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.924: INFO: Pod "webserver-deployment-595b5b9587-5qdfd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5qdfd webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-5qdfd 0170097d-6e0b-4c20-80bb-ef8494ee7f22 1241246 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0417 0xc002bb0418}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 21:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.924: INFO: Pod "webserver-deployment-595b5b9587-82rfb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-82rfb webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-82rfb 826dd5ac-de02-442f-b4a3-8685d52ca452 1241212 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0577 0xc002bb0578}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Prio
rity:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.924: INFO: Pod "webserver-deployment-595b5b9587-b6hbq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b6hbq webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-b6hbq 1647c432-429b-4c74-a812-345c50e44c34 1241038 0 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0697 0xc002bb0698}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.162,StartTime:2020-03-12 21:24:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a4fc252eca4b3b3a9ee6862bc06293bcd6775bbab58ddaff31778f4b179a443,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.925: INFO: Pod "webserver-deployment-595b5b9587-bs265" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bs265 webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-bs265 e9555ba0-4f7b-4480-99cc-bf1db15c3e44 1241204 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0817 0xc002bb0818}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.925: INFO: Pod "webserver-deployment-595b5b9587-d4f4z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4f4z webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-d4f4z 8e68c797-19da-4aad-8301-2466ca7881f6 1241220 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0937 0xc002bb0938}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.925: INFO: Pod "webserver-deployment-595b5b9587-dm8g2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dm8g2 webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-dm8g2 216b8e56-0518-4e23-9158-87b8a4e6db44 1241055 0 2020-03-12 
21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0a57 0xc002bb0a58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.163,StartTime:2020-03-12 21:24:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://295d6ef30e065ca31f2dce077b0072567a50b6591c8108aab965bd739314c9e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.925: INFO: Pod "webserver-deployment-595b5b9587-gk6n7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gk6n7 webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-gk6n7 78189d23-e9b0-48b2-9cac-d771e9b96a0c 1241072 0 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0bd7 0xc002bb0bd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.152,StartTime:2020-03-12 21:24:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f5160120310c3ee28005500d48bd7e553fdf3182647dc14dcb608799e5215941,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.925: INFO: Pod "webserver-deployment-595b5b9587-glpdt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-glpdt webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-glpdt a79a62e9-88dc-40b6-8811-ca4094be806b 1241050 0 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0d57 0xc002bb0d58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.160,StartTime:2020-03-12 21:24:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://617a113bb5619031e642141fc6aea075c5c84c0f8de4495559e55606da447a92,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.925: INFO: Pod "webserver-deployment-595b5b9587-hmkgc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hmkgc webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-hmkgc 4a09e99c-078b-4bd1-a061-3d06713d00ea 1241228 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb0ed7 0xc002bb0ed8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 21:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-jfr5r" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jfr5r webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-jfr5r 2d329547-2a4f-4f91-8472-193920c1aaa4 1241075 0 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb1037 0xc002bb1038}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.150,StartTime:2020-03-12 21:24:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0530370a427714365e0f43ea4178bb5582b89719402530a7b03654d0d99cb6cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-mv6pt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mv6pt webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-mv6pt 536632a5-eb56-45d4-8469-60415190d1f3 1241037 0 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb11b7 0xc002bb11b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.149,StartTime:2020-03-12 21:24:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://256de2a5c8e2ca57b799912f23715043e98b17872bd2ca468443a1088a34c547,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-mvssq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mvssq webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-mvssq 861b5631-8c5a-4ae5-b281-96c4aae974bb 1241203 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb1337 0xc002bb1338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-mw526" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mw526 webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-mw526 66ec0a7d-f1cc-4fcf-9558-52e663c99584 1241045 0 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb1457 0xc002bb1458}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.161,StartTime:2020-03-12 21:24:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://12e95c1b79ef1839e5ba770903fca5adf364c2a5d2eca53437cdd11ee11a4c5a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-nw4xt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nw4xt webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-nw4xt ae7de30c-d0aa-4136-a503-e243a848be74 1241217 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb15d7 0xc002bb15d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-q4pz6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q4pz6 webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-q4pz6 0334b349-ca36-49b4-befb-c40c47647a7f 1241041 0 2020-03-12 21:24:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb16f7 0xc002bb16f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.164,StartTime:2020-03-12 21:24:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 21:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a951441b6d08bc51fca185e0b65d7e6b14cb3eb605d031ea938bfb7b2fd6c2c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-v2znq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v2znq webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-v2znq ba17b6e8-0069-4021-a27b-da98484ce370 1241207 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb1877 0xc002bb1878}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.926: INFO: Pod "webserver-deployment-595b5b9587-w2qdz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w2qdz webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-w2qdz 42de8e96-1040-4c7d-95b1-9e14cc971841 1241201 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb1997 0xc002bb1998}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.927: INFO: Pod "webserver-deployment-595b5b9587-xzf2f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xzf2f webserver-deployment-595b5b9587- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-xzf2f 45c42356-3e71-49e1-89b7-21afd314006b 1241187 0 
2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb1ab7 0xc002bb1ab8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.927: INFO: Pod "webserver-deployment-595b5b9587-z6xdq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z6xdq webserver-deployment-595b5b9587- deployment-7631 
/api/v1/namespaces/deployment-7631/pods/webserver-deployment-595b5b9587-z6xdq 17f07522-1610-4b88-8d7a-93a95e714978 1241213 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4e7fbe1b-6886-4daf-98e7-da3ecefebc76 0xc002bb1bd7 0xc002bb1bd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.927: INFO: Pod "webserver-deployment-c7997dcc8-44p9r" is not available: 
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-44p9r webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-44p9r 4cfad639-9b98-454e-9877-437271eb0c64 1241134 0 2020-03-12 21:24:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002bb1cf7 0xc002bb1cf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 21:24:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.927: INFO: Pod "webserver-deployment-c7997dcc8-7bjtw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7bjtw webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-7bjtw 5ae6d0af-c5d3-45ad-ac68-96cabd28e1f9 1241206 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002bb1e77 0xc002bb1e78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountTok
en:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.927: INFO: Pod "webserver-deployment-c7997dcc8-874px" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-874px webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-874px 9a8de174-ea25-4b18-b9a4-1bbfa6a58d6e 1241133 0 2020-03-12 21:24:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002bb1fa7 0xc002bb1fa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-sc
heduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 21:24:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.927: INFO: Pod "webserver-deployment-c7997dcc8-bfq5x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bfq5x webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-bfq5x 63818ba3-6d7d-42cf-9e2a-574474d59799 1241115 0 2020-03-12 21:24:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002ab3567 0xc002ab3568}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 21:24:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-c5j9f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c5j9f webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-c5j9f 32122c7d-dd9f-4c90-a63a-d2e7a73be7ff 1241236 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002ab3867 0xc002ab3868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-g2szm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g2szm webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-g2szm f93a25ff-77d6-41c4-9d96-982a7cfe7b99 1241209 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002ab3ac7 0xc002ab3ac8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCla
ssName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-hbqgl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hbqgl webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-hbqgl da8b364c-076a-4f17-9878-76484b5c6890 1241124 0 2020-03-12 21:24:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002ab3d57 0xc002ab3d58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,SharePro
cessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 21:24:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-j9dkb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j9dkb webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-j9dkb a27ae2ea-b8e3-4604-9b6e-cace9c313b95 1241216 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002ab3fe7 0xc002ab3fe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-kfhtb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kfhtb webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-kfhtb 9941128e-0369-4cad-a8a0-341061819fd6 1241211 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
fe4a49b0-d883-4d61-9232-865553b4f809 0xc002b8cee7 0xc002b8cee8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-ktnvz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ktnvz webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-ktnvz 6c2be059-f039-4d1b-b106-4b9eea446996 1241245 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002b8d017 0xc002b8d018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 21:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-m2ktl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m2ktl webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-m2ktl 9cbb9100-2231-4268-923f-5fd2d9bd0afa 1241108 0 2020-03-12 21:24:34 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002b8d197 0xc002b8d198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*
0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-12 21:24:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.928: INFO: Pod "webserver-deployment-c7997dcc8-nkfmh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nkfmh webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-nkfmh c8742bd3-4900-45c6-a3f6-1412bd785e3c 1241214 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fe4a49b0-d883-4d61-9232-865553b4f809 0xc002b8d317 0xc002b8d318}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 21:24:36.929: INFO: Pod "webserver-deployment-c7997dcc8-z6szf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z6szf webserver-deployment-c7997dcc8- deployment-7631 /api/v1/namespaces/deployment-7631/pods/webserver-deployment-c7997dcc8-z6szf 9b00281e-6346-45a3-828d-186c00009203 1241202 0 2020-03-12 21:24:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
fe4a49b0-d883-4d61-9232-865553b4f809 0xc002b8d447 0xc002b8d448}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75nxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75nxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75nxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:24:36.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7631" for this suite. 
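------------------------------
All of the pod dumps above belong to the new ReplicaSet (webserver-deployment-c7997dcc8), whose template points at the unresolvable image webserver:404, so its pods sit in Pending/ContainerCreating forever; proportional scaling is what divides the replicas added mid-rollout between the old and new ReplicaSets while the rollout is stuck. Below is a minimal sketch of a Deployment shaped like the one under test — the replica count, surge, and unavailability values are illustrative, not taken from this log; only the "httpd" label and the webserver:404 image come from the dumps above.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(30)               // scaled up while the rollout is in progress
	maxSurge := intstr.FromInt(3)       // pods allowed above the desired count
	maxUnavailable := intstr.FromInt(2) // pods allowed to be unavailable during rollout
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "httpd"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "httpd"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "webserver:404", // intentionally unresolvable, as in the dumps above
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}

Because the new pods can never become Ready, scaling the Deployment while both ReplicaSets exist forces the controller to split the additional replicas proportionally between them — which is exactly the mixed old/new Pending state the dumps capture.
------------------------------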
• [SLOW TEST:6.678 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":99,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:24:37.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:24:37.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db" in namespace "downward-api-7127" to be "success or failure" Mar 12 21:24:37.482: INFO: Pod "downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.887054ms Mar 12 21:24:39.484: INFO: Pod "downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011221722s Mar 12 21:24:41.491: INFO: Pod "downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017836096s Mar 12 21:24:43.495: INFO: Pod "downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021905709s Mar 12 21:24:45.514: INFO: Pod "downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04106533s STEP: Saw pod success Mar 12 21:24:45.514: INFO: Pod "downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db" satisfied condition "success or failure" Mar 12 21:24:45.516: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db container client-container: STEP: delete the pod Mar 12 21:24:45.532: INFO: Waiting for pod downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db to disappear Mar 12 21:24:45.552: INFO: Pod downwardapi-volume-8f27dce7-d207-4c4b-a539-21fe48c662db no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:24:45.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7127" for this suite. 
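------------------------------
The downward API pod built above exposes its own container's memory request as a file inside a downwardAPI volume. A minimal sketch follows, assuming the v1.17-era k8s.io/api types; the image, mount path, file name, and the 32Mi request are illustrative — only the client-container name convention comes from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The kubelet writes the request into /etc/podinfo/memory_request (in bytes, unless a Divisor is set on the ResourceFieldSelector), and the test reads it back from the container's log — which is why the log above shows it fetching logs from the client-container after the pod succeeds.
------------------------------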
• [SLOW TEST:8.438 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1548,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:24:45.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 12 21:24:45.608: INFO: Waiting up to 5m0s for pod "client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1" in namespace "containers-8965" to be "success or failure" Mar 12 21:24:45.640: INFO: Pod "client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.232869ms Mar 12 21:24:47.643: INFO: Pod "client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035169767s Mar 12 21:24:49.646: INFO: Pod "client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038413948s STEP: Saw pod success Mar 12 21:24:49.646: INFO: Pod "client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1" satisfied condition "success or failure" Mar 12 21:24:49.654: INFO: Trying to get logs from node jerma-worker2 pod client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1 container test-container: STEP: delete the pod Mar 12 21:24:49.670: INFO: Waiting for pod client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1 to disappear Mar 12 21:24:49.675: INFO: Pod client-containers-b2e194e9-59a0-4250-b1eb-34d449ba0ee1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:24:49.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8965" for this suite. 
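------------------------------
Overriding an image's default arguments, as the Docker Containers test above does, amounts to setting args on the container spec — the Kubernetes equivalent of replacing the image's CMD while leaving its ENTRYPOINT intact (command would replace the ENTRYPOINT instead). A minimal sketch; the image and the argument values are illustrative, not the ones the conformance test uses.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Args replaces the image's default CMD; if the image
				// declares an ENTRYPOINT, it still runs and receives these.
				Args: []string{"echo", "override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The test then verifies the override took effect by reading the container's output, which is why the log shows it fetching logs from test-container after the pod reaches Succeeded.
------------------------------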
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1561,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:24:49.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0312 21:25:20.369018 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 21:25:20.369: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:25:20.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9205" for this suite. • [SLOW TEST:30.691 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":102,"skipped":1580,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:25:20.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:25:31.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3488" for this suite. • [SLOW TEST:11.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":103,"skipped":1588,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:25:31.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-96ba7e58-471e-474c-9f57-62dcb6ecbc57 STEP: Creating configMap with name cm-test-opt-upd-f231f13a-4f08-4598-b439-103738d74f0d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-96ba7e58-471e-474c-9f57-62dcb6ecbc57 STEP: Updating configmap cm-test-opt-upd-f231f13a-4f08-4598-b439-103738d74f0d STEP: Creating configMap with name cm-test-opt-create-6d4c0b3e-7579-4493-8182-aff947c9b578 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:04.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4676" for this suite. 
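------------------------------
The ConfigMap test above mounts volumes marked optional: one ConfigMap it later deletes, one it updates, and one it creates only after the pod is already running. Because the volumes are optional, the pod starts even while a referenced ConfigMap is missing, and the kubelet's periodic sync eventually reflects each change in the mounted files — that sync latency is why this spec takes ~90 seconds. A minimal sketch of one such optional volume follows; the names are shortened from the log (which appends UUIDs), and the image and paths are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true // the pod may start even if the ConfigMap does not exist yet
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "createcm-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "while true; do cat /etc/cm-volume/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cm-volume",
					MountPath: "/etc/cm-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             &optional,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Once cm-test-opt-create is created, the kubelet populates /etc/cm-volume on its next sync and the polling loop starts seeing the data — the "waiting to observe update in volume" step above.
------------------------------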
• [SLOW TEST:92.555 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:04.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:04.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7688" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:04.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-4259321c-b918-4f5b-826f-2514a2cd3393 STEP: Creating a pod to test consume secrets Mar 12 21:27:04.236: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d9e13a13-02ed-45cf-b551-6765ae40eec1" in namespace "projected-9737" to be "success or failure" Mar 12 21:27:04.252: INFO: Pod "pod-projected-secrets-d9e13a13-02ed-45cf-b551-6765ae40eec1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.81003ms Mar 12 21:27:06.255: INFO: Pod "pod-projected-secrets-d9e13a13-02ed-45cf-b551-6765ae40eec1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019002446s STEP: Saw pod success Mar 12 21:27:06.255: INFO: Pod "pod-projected-secrets-d9e13a13-02ed-45cf-b551-6765ae40eec1" satisfied condition "success or failure" Mar 12 21:27:06.257: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-d9e13a13-02ed-45cf-b551-6765ae40eec1 container secret-volume-test: STEP: delete the pod Mar 12 21:27:06.291: INFO: Waiting for pod pod-projected-secrets-d9e13a13-02ed-45cf-b551-6765ae40eec1 to disappear Mar 12 21:27:06.295: INFO: Pod pod-projected-secrets-d9e13a13-02ed-45cf-b551-6765ae40eec1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:06.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9737" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1648,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:06.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:17.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4823" for this suite. • [SLOW TEST:11.141 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":107,"skipped":1653,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:17.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 12 21:27:17.495: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:26.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2813" for this suite. • [SLOW TEST:8.598 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:26.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:27:26.094: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a" in namespace "projected-8599" to be "success or failure" Mar 12 21:27:26.110: INFO: Pod "downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.12337ms Mar 12 21:27:28.121: INFO: Pod "downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026053756s Mar 12 21:27:30.128: INFO: Pod "downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033920751s STEP: Saw pod success Mar 12 21:27:30.128: INFO: Pod "downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a" satisfied condition "success or failure" Mar 12 21:27:30.138: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a container client-container: STEP: delete the pod Mar 12 21:27:30.151: INFO: Waiting for pod downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a to disappear Mar 12 21:27:30.156: INFO: Pod downwardapi-volume-a201e74a-5085-482a-a2ba-8baf92004e5a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:30.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8599" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1739,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:30.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7cb9c152-b0a7-450e-9e2a-1c6b6293c7b3 STEP: Creating a pod to test consume secrets Mar 12 21:27:30.248: INFO: Waiting up to 5m0s for pod "pod-secrets-98202f47-636c-40d6-bbf3-40c22c98ffc4" in namespace "secrets-4378" to be "success or failure" Mar 12 21:27:30.278: INFO: Pod "pod-secrets-98202f47-636c-40d6-bbf3-40c22c98ffc4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.022157ms Mar 12 21:27:32.283: INFO: Pod "pod-secrets-98202f47-636c-40d6-bbf3-40c22c98ffc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034536499s STEP: Saw pod success Mar 12 21:27:32.283: INFO: Pod "pod-secrets-98202f47-636c-40d6-bbf3-40c22c98ffc4" satisfied condition "success or failure" Mar 12 21:27:32.286: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-98202f47-636c-40d6-bbf3-40c22c98ffc4 container secret-volume-test: STEP: delete the pod Mar 12 21:27:32.301: INFO: Waiting for pod pod-secrets-98202f47-636c-40d6-bbf3-40c22c98ffc4 to disappear Mar 12 21:27:32.321: INFO: Pod pod-secrets-98202f47-636c-40d6-bbf3-40c22c98ffc4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:32.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4378" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:32.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 21:27:32.356: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 21:27:32.376: INFO: Waiting for terminating namespaces to be deleted... Mar 12 21:27:32.378: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 12 21:27:32.383: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 21:27:32.383: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 21:27:32.383: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 21:27:32.383: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 21:27:32.383: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 12 21:27:32.386: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 21:27:32.386: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 21:27:32.386: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded) Mar 12 21:27:32.386: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-33450229-f75f-4e33-b4bf-113cfd62a737 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-33450229-f75f-4e33-b4bf-113cfd62a737 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-33450229-f75f-4e33-b4bf-113cfd62a737 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:27:38.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4660" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:6.184 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":111,"skipped":1776,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:27:38.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:27:38.594: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 12 21:27:38.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:38.625: INFO: Number of nodes with available pods: 0 Mar 12 21:27:38.625: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:27:39.631: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:39.633: INFO: Number of nodes with available pods: 0 Mar 12 21:27:39.633: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:27:40.629: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:40.632: INFO: Number of nodes with available pods: 2 Mar 12 21:27:40.632: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 12 21:27:40.656: INFO: Wrong image for pod: daemon-set-9fhtv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:40.656: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:40.679: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:41.687: INFO: Wrong image for pod: daemon-set-9fhtv. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:41.687: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:41.690: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:42.682: INFO: Wrong image for pod: daemon-set-9fhtv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:42.682: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:42.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:43.682: INFO: Wrong image for pod: daemon-set-9fhtv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:43.682: INFO: Pod daemon-set-9fhtv is not available Mar 12 21:27:43.682: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:43.685: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:44.682: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:44.682: INFO: Pod daemon-set-h48wp is not available Mar 12 21:27:44.685: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:45.682: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:45.682: INFO: Pod daemon-set-h48wp is not available Mar 12 21:27:45.684: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:46.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:46.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:46.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:47.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:47.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:47.685: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:48.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 12 21:27:48.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:48.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:49.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:49.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:49.688: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:50.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:50.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:50.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:51.690: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:51.690: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:51.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:52.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:52.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:52.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:53.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:53.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:53.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:54.684: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 12 21:27:54.684: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:54.689: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:55.683: INFO: Wrong image for pod: daemon-set-db4w6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 12 21:27:55.683: INFO: Pod daemon-set-db4w6 is not available Mar 12 21:27:55.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:56.683: INFO: Pod daemon-set-7p2nr is not available Mar 12 21:27:56.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 12 21:27:56.690: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:56.693: INFO: Number of nodes with available pods: 1 Mar 12 21:27:56.693: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:27:57.697: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:27:57.701: INFO: Number of nodes with available pods: 2 Mar 12 21:27:57.701: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3945, will wait for the garbage collector to delete the pods Mar 12 21:27:57.773: INFO: Deleting DaemonSet.extensions daemon-set took: 6.103293ms Mar 12 21:27:58.073: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.205058ms Mar 12 21:28:06.082: INFO: Number of nodes with available pods: 0 Mar 12 21:28:06.082: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 21:28:06.084: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3945/daemonsets","resourceVersion":"1242535"},"items":null} Mar 12 21:28:06.086: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3945/pods","resourceVersion":"1242535"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:28:06.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3945" for this suite. 
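The churn in the log above ("Wrong image for pod ... / Pod ... is not available") is the RollingUpdate controller replacing daemon pods one node at a time after the pod template's image changes. A sketch of what triggers it, with the two images taken verbatim from the log; the DaemonSet name, label, and namespace are illustrative, and a production version would wrap the Update in a conflict-retry loop.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set-demo"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is what drives the pod-by-pod replacement above.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{Containers: []v1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	created, err := client.AppsV1().DaemonSets("default").Create(ds)
	if err != nil {
		panic(err)
	}

	// Changing the pod template triggers the rolling update: the controller
	// deletes each old pod and waits for its replacement to become available.
	created.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
	if _, err := client.AppsV1().DaemonSets("default").Update(created); err != nil {
		panic(err)
	}
}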
• [SLOW TEST:27.612 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":112,"skipped":1797,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:28:06.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9222 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 21:28:06.197: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 21:28:32.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.182:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9222 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:28:32.344: INFO: >>> kubeConfig: /root/.kube/config I0312 21:28:32.379378 6 log.go:172] (0xc00223c840) (0xc0013754a0) Create stream I0312 21:28:32.379418 6 log.go:172] (0xc00223c840) (0xc0013754a0) Stream added, broadcasting: 1 I0312 21:28:32.381548 6 log.go:172] (0xc00223c840) Reply frame received for 1 I0312 21:28:32.381588 6 log.go:172] (0xc00223c840) (0xc001db7a40) Create stream I0312 21:28:32.381600 6 log.go:172] (0xc00223c840) (0xc001db7a40) Stream added, broadcasting: 3 I0312 21:28:32.383116 6 log.go:172] (0xc00223c840) Reply frame received for 3 I0312 21:28:32.383143 6 log.go:172] (0xc00223c840) (0xc0013755e0) Create stream I0312 21:28:32.383157 6 log.go:172] (0xc00223c840) (0xc0013755e0) Stream added, broadcasting: 5 I0312 21:28:32.383978 6 log.go:172] (0xc00223c840) Reply frame received for 5 I0312 21:28:32.452178 6 log.go:172] (0xc00223c840) Data frame received for 3 I0312 21:28:32.452218 6 log.go:172] (0xc001db7a40) (3) Data frame handling I0312 21:28:32.452246 6 log.go:172] (0xc001db7a40) (3) Data frame sent I0312 21:28:32.452405 6 log.go:172] (0xc00223c840) Data frame received for 3 I0312 21:28:32.452428 6 log.go:172] (0xc001db7a40) (3) Data frame handling I0312 21:28:32.452582 6 log.go:172] (0xc00223c840) Data frame received for 5 I0312 21:28:32.452605 6 log.go:172] (0xc0013755e0) (5) Data frame handling I0312 21:28:32.453777 6 log.go:172] (0xc00223c840) Data frame received for 1 I0312 21:28:32.453800 6 log.go:172] (0xc0013754a0) (1) Data frame handling I0312 21:28:32.453817 6 log.go:172] 
(0xc0013754a0) (1) Data frame sent I0312 21:28:32.453832 6 log.go:172] (0xc00223c840) (0xc0013754a0) Stream removed, broadcasting: 1 I0312 21:28:32.453973 6 log.go:172] (0xc00223c840) (0xc0013754a0) Stream removed, broadcasting: 1 I0312 21:28:32.453991 6 log.go:172] (0xc00223c840) (0xc001db7a40) Stream removed, broadcasting: 3 I0312 21:28:32.454186 6 log.go:172] (0xc00223c840) (0xc0013755e0) Stream removed, broadcasting: 5 I0312 21:28:32.454283 6 log.go:172] (0xc00223c840) Go away received Mar 12 21:28:32.454: INFO: Found all expected endpoints: [netserver-0] Mar 12 21:28:32.457: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.180:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9222 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:28:32.457: INFO: >>> kubeConfig: /root/.kube/config I0312 21:28:32.485817 6 log.go:172] (0xc00223cdc0) (0xc001440140) Create stream I0312 21:28:32.485842 6 log.go:172] (0xc00223cdc0) (0xc001440140) Stream added, broadcasting: 1 I0312 21:28:32.488461 6 log.go:172] (0xc00223cdc0) Reply frame received for 1 I0312 21:28:32.488512 6 log.go:172] (0xc00223cdc0) (0xc001db7ae0) Create stream I0312 21:28:32.488525 6 log.go:172] (0xc00223cdc0) (0xc001db7ae0) Stream added, broadcasting: 3 I0312 21:28:32.489422 6 log.go:172] (0xc00223cdc0) Reply frame received for 3 I0312 21:28:32.489515 6 log.go:172] (0xc00223cdc0) (0xc000adba40) Create stream I0312 21:28:32.489545 6 log.go:172] (0xc00223cdc0) (0xc000adba40) Stream added, broadcasting: 5 I0312 21:28:32.490674 6 log.go:172] (0xc00223cdc0) Reply frame received for 5 I0312 21:28:32.543706 6 log.go:172] (0xc00223cdc0) Data frame received for 3 I0312 21:28:32.543739 6 log.go:172] (0xc001db7ae0) (3) Data frame handling I0312 21:28:32.543760 6 log.go:172] (0xc001db7ae0) (3) Data frame sent I0312 21:28:32.543774 6 log.go:172] (0xc00223cdc0) Data frame received for 3 I0312 21:28:32.543780 6 log.go:172] (0xc001db7ae0) (3) Data frame handling I0312 21:28:32.544134 6 log.go:172] (0xc00223cdc0) Data frame received for 5 I0312 21:28:32.544153 6 log.go:172] (0xc000adba40) (5) Data frame handling I0312 21:28:32.545202 6 log.go:172] (0xc00223cdc0) Data frame received for 1 I0312 21:28:32.545229 6 log.go:172] (0xc001440140) (1) Data frame handling I0312 21:28:32.545246 6 log.go:172] (0xc001440140) (1) Data frame sent I0312 21:28:32.545260 6 log.go:172] (0xc00223cdc0) (0xc001440140) Stream removed, broadcasting: 1 I0312 21:28:32.545289 6 log.go:172] (0xc00223cdc0) Go away received I0312 21:28:32.545351 6 log.go:172] (0xc00223cdc0) (0xc001440140) Stream removed, broadcasting: 1 I0312 21:28:32.545366 6 log.go:172] (0xc00223cdc0) (0xc001db7ae0) Stream removed, broadcasting: 3 I0312 21:28:32.545376 6 log.go:172] (0xc00223cdc0) (0xc000adba40) Stream removed, broadcasting: 5 Mar 12 21:28:32.545: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:28:32.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9222" for this suite. 
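Stripped of the streaming-exec plumbing in the frames above, each endpoint check is an HTTP GET against a netserver pod's /hostName handler with a 15-second budget, run from a pod on the other node. A standalone Go equivalent of the curl command the suite executes; the pod IP is this particular run's and changes every run.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	// Equivalent of the logged command:
	//   curl -g -q -s --max-time 15 http://10.244.2.182:8080/hostName
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://10.244.2.182:8080/hostName")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The handler answers with the serving pod's name, e.g. "netserver-0";
	// collecting every expected name proves node-to-pod connectivity.
	fmt.Println(string(body))
}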
• [SLOW TEST:26.427 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1814,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:28:32.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 12 21:28:33.145: INFO: Pod name wrapped-volume-race-cc388f0d-8027-4659-b670-62125481ada2: Found 0 pods out of 5 Mar 12 21:28:38.150: INFO: Pod name wrapped-volume-race-cc388f0d-8027-4659-b670-62125481ada2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cc388f0d-8027-4659-b670-62125481ada2 in namespace emptydir-wrapper-7215, will wait for the garbage collector to delete the pods Mar 12 21:28:48.314: INFO: Deleting ReplicationController wrapped-volume-race-cc388f0d-8027-4659-b670-62125481ada2 took: 4.447987ms Mar 12 21:28:48.614: INFO: Terminating ReplicationController wrapped-volume-race-cc388f0d-8027-4659-b670-62125481ada2 pods took: 300.183825ms STEP: Creating RC which spawns configmap-volume pods Mar 12 21:28:56.456: INFO: Pod name wrapped-volume-race-1a698fd3-1311-42d7-b502-e4a7c6a368f6: Found 0 pods out of 5 Mar 12 21:29:01.461: INFO: Pod name wrapped-volume-race-1a698fd3-1311-42d7-b502-e4a7c6a368f6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1a698fd3-1311-42d7-b502-e4a7c6a368f6 in namespace emptydir-wrapper-7215, will wait for the garbage collector to delete the pods Mar 12 21:29:11.554: INFO: Deleting ReplicationController wrapped-volume-race-1a698fd3-1311-42d7-b502-e4a7c6a368f6 took: 7.322539ms Mar 12 21:29:11.954: INFO: Terminating ReplicationController wrapped-volume-race-1a698fd3-1311-42d7-b502-e4a7c6a368f6 pods took: 400.287724ms STEP: Creating RC which spawns configmap-volume pods Mar 12 21:29:17.687: INFO: Pod name wrapped-volume-race-ef77b64f-4e7e-4691-b1bb-63979990be5c: Found 0 pods out of 5 Mar 12 21:29:22.699: INFO: Pod name wrapped-volume-race-ef77b64f-4e7e-4691-b1bb-63979990be5c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ef77b64f-4e7e-4691-b1bb-63979990be5c in namespace 
emptydir-wrapper-7215, will wait for the garbage collector to delete the pods Mar 12 21:29:34.773: INFO: Deleting ReplicationController wrapped-volume-race-ef77b64f-4e7e-4691-b1bb-63979990be5c took: 4.167327ms Mar 12 21:29:35.173: INFO: Terminating ReplicationController wrapped-volume-race-ef77b64f-4e7e-4691-b1bb-63979990be5c pods took: 400.195983ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:29:46.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7215" for this suite. • [SLOW TEST:74.093 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":114,"skipped":1824,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:29:46.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-82862835-1e97-43c1-8602-1faaa8b5422b STEP: Creating a pod to test consume configMaps Mar 12 21:29:46.691: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2c05c7d-6fa9-43b4-9afb-89c546fb4df6" in namespace "projected-165" to be "success or failure" Mar 12 21:29:46.720: INFO: Pod "pod-projected-configmaps-b2c05c7d-6fa9-43b4-9afb-89c546fb4df6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.865488ms Mar 12 21:29:48.727: INFO: Pod "pod-projected-configmaps-b2c05c7d-6fa9-43b4-9afb-89c546fb4df6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.035618922s STEP: Saw pod success Mar 12 21:29:48.727: INFO: Pod "pod-projected-configmaps-b2c05c7d-6fa9-43b4-9afb-89c546fb4df6" satisfied condition "success or failure" Mar 12 21:29:48.729: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b2c05c7d-6fa9-43b4-9afb-89c546fb4df6 container projected-configmap-volume-test: STEP: delete the pod Mar 12 21:29:48.773: INFO: Waiting for pod pod-projected-configmaps-b2c05c7d-6fa9-43b4-9afb-89c546fb4df6 to disappear Mar 12 21:29:48.776: INFO: Pod pod-projected-configmaps-b2c05c7d-6fa9-43b4-9afb-89c546fb4df6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:29:48.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-165" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1835,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:29:48.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9712 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9712 STEP: creating replication controller externalsvc in namespace services-9712 I0312 21:29:48.945230 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9712, replica count: 2 I0312 21:29:51.995575 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 12 21:29:52.065: INFO: Creating new exec pod Mar 12 21:29:54.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9712 execpodbzf56 -- /bin/sh -x -c nslookup nodeport-service' Mar 12 21:29:55.852: INFO: stderr: "I0312 21:29:55.765719 944 log.go:172] (0xc0003c3290) (0xc00083c0a0) Create stream\nI0312 21:29:55.765784 944 log.go:172] (0xc0003c3290) (0xc00083c0a0) Stream added, broadcasting: 1\nI0312 21:29:55.768430 944 log.go:172] (0xc0003c3290) Reply frame received for 1\nI0312 21:29:55.768478 944 log.go:172] (0xc0003c3290) (0xc0007f80a0) Create stream\nI0312 21:29:55.768486 944 log.go:172] (0xc0003c3290) (0xc0007f80a0) Stream added, broadcasting: 3\nI0312 21:29:55.769447 944 log.go:172] (0xc0003c3290) Reply frame received for 3\nI0312 21:29:55.769480 
944 log.go:172] (0xc0003c3290) (0xc0007d0000) Create stream\nI0312 21:29:55.769491 944 log.go:172] (0xc0003c3290) (0xc0007d0000) Stream added, broadcasting: 5\nI0312 21:29:55.770325 944 log.go:172] (0xc0003c3290) Reply frame received for 5\nI0312 21:29:55.836860 944 log.go:172] (0xc0003c3290) Data frame received for 5\nI0312 21:29:55.836884 944 log.go:172] (0xc0007d0000) (5) Data frame handling\nI0312 21:29:55.836898 944 log.go:172] (0xc0007d0000) (5) Data frame sent\n+ nslookup nodeport-service\nI0312 21:29:55.845511 944 log.go:172] (0xc0003c3290) Data frame received for 3\nI0312 21:29:55.845528 944 log.go:172] (0xc0007f80a0) (3) Data frame handling\nI0312 21:29:55.845544 944 log.go:172] (0xc0007f80a0) (3) Data frame sent\nI0312 21:29:55.846796 944 log.go:172] (0xc0003c3290) Data frame received for 3\nI0312 21:29:55.846818 944 log.go:172] (0xc0007f80a0) (3) Data frame handling\nI0312 21:29:55.846836 944 log.go:172] (0xc0007f80a0) (3) Data frame sent\nI0312 21:29:55.847159 944 log.go:172] (0xc0003c3290) Data frame received for 5\nI0312 21:29:55.847177 944 log.go:172] (0xc0007d0000) (5) Data frame handling\nI0312 21:29:55.847330 944 log.go:172] (0xc0003c3290) Data frame received for 3\nI0312 21:29:55.847348 944 log.go:172] (0xc0007f80a0) (3) Data frame handling\nI0312 21:29:55.848913 944 log.go:172] (0xc0003c3290) Data frame received for 1\nI0312 21:29:55.848935 944 log.go:172] (0xc00083c0a0) (1) Data frame handling\nI0312 21:29:55.848948 944 log.go:172] (0xc00083c0a0) (1) Data frame sent\nI0312 21:29:55.848963 944 log.go:172] (0xc0003c3290) (0xc00083c0a0) Stream removed, broadcasting: 1\nI0312 21:29:55.849291 944 log.go:172] (0xc0003c3290) (0xc00083c0a0) Stream removed, broadcasting: 1\nI0312 21:29:55.849310 944 log.go:172] (0xc0003c3290) (0xc0007f80a0) Stream removed, broadcasting: 3\nI0312 21:29:55.849322 944 log.go:172] (0xc0003c3290) (0xc0007d0000) Stream removed, broadcasting: 5\n" Mar 12 21:29:55.852: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9712.svc.cluster.local\tcanonical name = externalsvc.services-9712.svc.cluster.local.\nName:\texternalsvc.services-9712.svc.cluster.local\nAddress: 10.111.114.89\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9712, will wait for the garbage collector to delete the pods Mar 12 21:29:55.926: INFO: Deleting ReplicationController externalsvc took: 20.229946ms Mar 12 21:29:56.026: INFO: Terminating ReplicationController externalsvc pods took: 100.260941ms Mar 12 21:30:06.176: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:30:06.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9712" for this suite. 
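The type flip itself is a single service update. A sketch using the names from this run (namespace services-9712, service nodeport-service, target externalsvc); client-go v0.17 as before. An ExternalName service carries no cluster IP and no node ports, so both are cleared before the update, which is roughly what the suite's helper does as well.

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	svcs := client.CoreV1().Services("services-9712")
	svc, err := svcs.Get("nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the type and clear the fields an ExternalName service cannot carry.
	svc.Spec.Type = v1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-9712.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0
	}
	if _, err := svcs.Update(svc); err != nil {
		panic(err)
	}
	// DNS for nodeport-service now resolves as a CNAME to externalsvc,
	// which is exactly what the nslookup output above shows.
}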
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.447 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":116,"skipped":1850,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:30:06.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 12 21:30:06.307: INFO: Waiting up to 5m0s for pod "client-containers-85846a91-4080-4cd5-a002-402f5491ac35" in namespace "containers-7682" to be "success or failure" Mar 12 21:30:06.343: INFO: Pod "client-containers-85846a91-4080-4cd5-a002-402f5491ac35": Phase="Pending", Reason="", readiness=false. Elapsed: 36.0072ms Mar 12 21:30:08.346: INFO: Pod "client-containers-85846a91-4080-4cd5-a002-402f5491ac35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039007162s STEP: Saw pod success Mar 12 21:30:08.346: INFO: Pod "client-containers-85846a91-4080-4cd5-a002-402f5491ac35" satisfied condition "success or failure" Mar 12 21:30:08.348: INFO: Trying to get logs from node jerma-worker pod client-containers-85846a91-4080-4cd5-a002-402f5491ac35 container test-container: STEP: delete the pod Mar 12 21:30:08.379: INFO: Waiting for pod client-containers-85846a91-4080-4cd5-a002-402f5491ac35 to disappear Mar 12 21:30:08.383: INFO: Pod client-containers-85846a91-4080-4cd5-a002-402f5491ac35 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:30:08.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7682" for this suite. 
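What "override the image's default command (docker entrypoint)" boils down to: a container's command field replaces the image's ENTRYPOINT, while args (unset here) would replace its CMD. An illustrative sketch, not the suite's actual fixture, reusing the httpd image seen earlier in this log with its entrypoint overridden.

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/httpd:2.4.38-alpine",
				// Command replaces the image's ENTRYPOINT; without it this
				// image would start httpd instead of running the echo.
				Command: []string{"sh", "-c", "echo entrypoint overridden"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}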
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1862,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:30:08.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:30:33.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8244" for this suite. 
• [SLOW TEST:25.398 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:30:33.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-6698 STEP: Creating a pod to test atomic-volume-subpath Mar 12 21:30:33.864: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6698" in namespace "subpath-8482" to be "success or failure" Mar 12 21:30:33.869: INFO: Pod "pod-subpath-test-projected-6698": Phase="Pending", Reason="", readiness=false. Elapsed: 4.693476ms Mar 12 21:30:35.872: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 2.007419027s Mar 12 21:30:37.876: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 4.011405611s Mar 12 21:30:39.880: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 6.015467636s Mar 12 21:30:41.884: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 8.019306621s Mar 12 21:30:43.887: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 10.022940744s Mar 12 21:30:45.891: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 12.026818631s Mar 12 21:30:47.894: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 14.029844187s Mar 12 21:30:49.898: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 16.033664572s Mar 12 21:30:51.908: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. Elapsed: 18.043649706s Mar 12 21:30:53.912: INFO: Pod "pod-subpath-test-projected-6698": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.047562573s Mar 12 21:30:55.916: INFO: Pod "pod-subpath-test-projected-6698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051333036s STEP: Saw pod success Mar 12 21:30:55.916: INFO: Pod "pod-subpath-test-projected-6698" satisfied condition "success or failure" Mar 12 21:30:55.918: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-6698 container test-container-subpath-projected-6698: STEP: delete the pod Mar 12 21:30:55.940: INFO: Waiting for pod pod-subpath-test-projected-6698 to disappear Mar 12 21:30:55.975: INFO: Pod pod-subpath-test-projected-6698 no longer exists STEP: Deleting pod pod-subpath-test-projected-6698 Mar 12 21:30:55.975: INFO: Deleting pod "pod-subpath-test-projected-6698" in namespace "subpath-8482" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:30:55.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8482" for this suite. • [SLOW TEST:22.280 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":119,"skipped":1972,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:30:56.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 12 21:30:56.102: INFO: >>> kubeConfig: /root/.kube/config Mar 12 21:30:57.936: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:31:08.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1748" for this suite. 
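What gets published here: each CRD's structural OpenAPI v3 schema is re-served by kube-apiserver in its aggregated OpenAPI document, and the test checks that two CRDs in different API groups both show up there. A minimal sketch of registering one such CRD with the apiextensions/v1 client; the group, kind, and schema are invented for illustration, and the context-less Create matches the v0.17 client generation used throughout these sketches.

package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// apiextensions.k8s.io/v1 requires a structural schema; this is the schema
	// the apiserver republishes in its OpenAPI document.
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"replicas": {Type: "integer"},
								},
							},
						},
					},
				},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(crd); err != nil {
		panic(err)
	}
}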
• [SLOW TEST:11.985 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":120,"skipped":2007,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:31:08.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:31:08.750: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:31:10.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719645468, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719645468, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719645468, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719645468, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:31:13.784: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:31:13.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from 
the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:31:14.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6981" for this suite. STEP: Destroying namespace "webhook-6981-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.987 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":121,"skipped":2018,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:31:15.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:31:31.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8274" for this suite. • [SLOW TEST:16.253 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":122,"skipped":2032,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:31:31.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-818079ef-2017-41f9-aa01-f2e976a3b8fd STEP: Creating a pod to test consume configMaps Mar 12 21:31:31.379: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12d50e99-04b0-474b-a169-9f0aa2da3b0d" in namespace "projected-5875" to be "success or failure" Mar 12 21:31:31.393: INFO: Pod "pod-projected-configmaps-12d50e99-04b0-474b-a169-9f0aa2da3b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.494448ms Mar 12 21:31:33.398: INFO: Pod "pod-projected-configmaps-12d50e99-04b0-474b-a169-9f0aa2da3b0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019163238s STEP: Saw pod success Mar 12 21:31:33.398: INFO: Pod "pod-projected-configmaps-12d50e99-04b0-474b-a169-9f0aa2da3b0d" satisfied condition "success or failure" Mar 12 21:31:33.404: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-12d50e99-04b0-474b-a169-9f0aa2da3b0d container projected-configmap-volume-test: STEP: delete the pod Mar 12 21:31:33.446: INFO: Waiting for pod pod-projected-configmaps-12d50e99-04b0-474b-a169-9f0aa2da3b0d to disappear Mar 12 21:31:33.448: INFO: Pod pod-projected-configmaps-12d50e99-04b0-474b-a169-9f0aa2da3b0d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:31:33.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5875" for this suite. 
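The projected-volume test above mounts a single ConfigMap key at a remapped path inside the pod. A minimal sketch of the kind of manifest it exercises (all names here are illustrative, and this assumes kubectl is pointed at a comparable cluster):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-config                 # hypothetical name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-demo              # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["cat", "/etc/projected-configmap-volume/new-path-data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: demo-config
            items:
            - key: data-1             # the "mapping": key data-1 surfaces
              path: new-path-data-1   # under a different file name
  EOF

The projected source takes the same items list as a plain configMap volume; its value is that several sources (configMap, secret, downwardAPI, serviceAccountToken) can be combined under a single mount point.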
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2049,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:31:33.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 12 21:31:33.522: INFO: Waiting up to 5m0s for pod "pod-729df0ec-9005-4dcc-9b0e-646783c648c6" in namespace "emptydir-7477" to be "success or failure" Mar 12 21:31:33.526: INFO: Pod "pod-729df0ec-9005-4dcc-9b0e-646783c648c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.49121ms Mar 12 21:31:35.530: INFO: Pod "pod-729df0ec-9005-4dcc-9b0e-646783c648c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007105477s STEP: Saw pod success Mar 12 21:31:35.530: INFO: Pod "pod-729df0ec-9005-4dcc-9b0e-646783c648c6" satisfied condition "success or failure" Mar 12 21:31:35.532: INFO: Trying to get logs from node jerma-worker2 pod pod-729df0ec-9005-4dcc-9b0e-646783c648c6 container test-container: STEP: delete the pod Mar 12 21:31:35.569: INFO: Waiting for pod pod-729df0ec-9005-4dcc-9b0e-646783c648c6 to disappear Mar 12 21:31:35.574: INFO: Pod pod-729df0ec-9005-4dcc-9b0e-646783c648c6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:31:35.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7477" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2058,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:31:35.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 21:31:35.630: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 21:31:35.656: INFO: Waiting for terminating namespaces to be deleted... 
Mar 12 21:31:35.658: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 12 21:31:35.663: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:31:35.663: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 21:31:35.663: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:31:35.663: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 21:31:35.663: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 12 21:31:35.667: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:31:35.667: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 21:31:35.667: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:31:35.667: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-311040d8-b043-45b6-9208-26c36c229720 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-311040d8-b043-45b6-9208-26c36c229720 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-311040d8-b043-45b6-9208-26c36c229720 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:36:41.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6871" for this suite.
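What the predicate above validates: two pods cannot both claim hostPort 54322/TCP on one node, even though pod4 binds hostIP 0.0.0.0 (the empty string in the spec) and pod5 binds 127.0.0.1, because 0.0.0.0 covers every host address. A rough reproduction (illustrative names; pinning via the well-known kubernetes.io/hostname label stands in for the random-label dance in the log, and assumes that label matches the node name):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod4
  spec:
    nodeSelector:
      kubernetes.io/hostname: jerma-worker
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]                 # keep the container running
      ports:
      - containerPort: 54322
        hostPort: 54322               # hostIP omitted, i.e. 0.0.0.0
        protocol: TCP
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod5
  spec:
    nodeSelector:
      kubernetes.io/hostname: jerma-worker
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
      ports:
      - containerPort: 54322
        hostPort: 54322
        hostIP: 127.0.0.1             # still conflicts with pod4's 0.0.0.0 binding
        protocol: TCP
  EOF

kubectl describe pod pod5 should then show a FailedScheduling event for the port conflict; the test confirms pod5 never schedules by waiting out a five-minute timeout, which is why this spec runs for roughly 306 seconds.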
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:306.281 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":125,"skipped":2071,"failed":0} SS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:36:41.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 12 21:36:41.938: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3183" to be "success or failure" Mar 12 21:36:41.969: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 31.337662ms Mar 12 21:36:43.973: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035104752s STEP: Saw pod success Mar 12 21:36:43.973: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 12 21:36:43.975: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 12 21:36:44.004: INFO: Waiting for pod pod-host-path-test to disappear Mar 12 21:36:44.028: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:36:44.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3183" for this suite. 
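The HostPath test above ("should give a volume the correct mode") mounts a directory from the node's filesystem and asserts on the mode seen inside the container. A minimal sketch (illustrative names; assumes busybox's stat applet):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-host-path-demo          # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox
      command: ["sh", "-c", "stat -c '%a %n' /test-volume"]   # print the mount's mode
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      hostPath:
        path: /tmp                    # a directory taken from the node
        type: Directory               # fail fast if /tmp is not a directory
  EOF
  kubectl logs pod-host-path-demo     # shows the observed mode once the pod succeeds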
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:36:44.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:36:44.082: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:36:45.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1068" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":127,"skipped":2113,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:36:45.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-156 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 21:36:45.375: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 21:37:07.482: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.203:8080/dial?request=hostname&protocol=http&host=10.244.2.196&port=8080&tries=1'] Namespace:pod-network-test-156 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:37:07.482: INFO: >>> kubeConfig: /root/.kube/config I0312 21:37:07.514868 6 log.go:172] (0xc001af89a0) (0xc000aecf00) Create stream I0312 21:37:07.514899 6 log.go:172] (0xc001af89a0) (0xc000aecf00) Stream added, broadcasting: 1 I0312 21:37:07.517871 6 log.go:172] (0xc001af89a0) Reply frame received for 1 I0312 21:37:07.517927 6 
log.go:172] (0xc001af89a0) (0xc0019fc0a0) Create stream I0312 21:37:07.517946 6 log.go:172] (0xc001af89a0) (0xc0019fc0a0) Stream added, broadcasting: 3 I0312 21:37:07.518975 6 log.go:172] (0xc001af89a0) Reply frame received for 3 I0312 21:37:07.519031 6 log.go:172] (0xc001af89a0) (0xc000aecfa0) Create stream I0312 21:37:07.519044 6 log.go:172] (0xc001af89a0) (0xc000aecfa0) Stream added, broadcasting: 5 I0312 21:37:07.520350 6 log.go:172] (0xc001af89a0) Reply frame received for 5 I0312 21:37:07.597785 6 log.go:172] (0xc001af89a0) Data frame received for 5 I0312 21:37:07.597818 6 log.go:172] (0xc000aecfa0) (5) Data frame handling I0312 21:37:07.597865 6 log.go:172] (0xc001af89a0) Data frame received for 3 I0312 21:37:07.597895 6 log.go:172] (0xc0019fc0a0) (3) Data frame handling I0312 21:37:07.597920 6 log.go:172] (0xc0019fc0a0) (3) Data frame sent I0312 21:37:07.597933 6 log.go:172] (0xc001af89a0) Data frame received for 3 I0312 21:37:07.597946 6 log.go:172] (0xc0019fc0a0) (3) Data frame handling I0312 21:37:07.599193 6 log.go:172] (0xc001af89a0) Data frame received for 1 I0312 21:37:07.599217 6 log.go:172] (0xc000aecf00) (1) Data frame handling I0312 21:37:07.599232 6 log.go:172] (0xc000aecf00) (1) Data frame sent I0312 21:37:07.599247 6 log.go:172] (0xc001af89a0) (0xc000aecf00) Stream removed, broadcasting: 1 I0312 21:37:07.599266 6 log.go:172] (0xc001af89a0) Go away received I0312 21:37:07.599438 6 log.go:172] (0xc001af89a0) (0xc000aecf00) Stream removed, broadcasting: 1 I0312 21:37:07.599459 6 log.go:172] (0xc001af89a0) (0xc0019fc0a0) Stream removed, broadcasting: 3 I0312 21:37:07.599467 6 log.go:172] (0xc001af89a0) (0xc000aecfa0) Stream removed, broadcasting: 5 Mar 12 21:37:07.599: INFO: Waiting for responses: map[] Mar 12 21:37:07.602: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.203:8080/dial?request=hostname&protocol=http&host=10.244.1.202&port=8080&tries=1'] Namespace:pod-network-test-156 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 21:37:07.602: INFO: >>> kubeConfig: /root/.kube/config I0312 21:37:07.630092 6 log.go:172] (0xc001eeaa50) (0xc0028486e0) Create stream I0312 21:37:07.630149 6 log.go:172] (0xc001eeaa50) (0xc0028486e0) Stream added, broadcasting: 1 I0312 21:37:07.632633 6 log.go:172] (0xc001eeaa50) Reply frame received for 1 I0312 21:37:07.632663 6 log.go:172] (0xc001eeaa50) (0xc000aed0e0) Create stream I0312 21:37:07.632690 6 log.go:172] (0xc001eeaa50) (0xc000aed0e0) Stream added, broadcasting: 3 I0312 21:37:07.633605 6 log.go:172] (0xc001eeaa50) Reply frame received for 3 I0312 21:37:07.633664 6 log.go:172] (0xc001eeaa50) (0xc001e3c000) Create stream I0312 21:37:07.633682 6 log.go:172] (0xc001eeaa50) (0xc001e3c000) Stream added, broadcasting: 5 I0312 21:37:07.634580 6 log.go:172] (0xc001eeaa50) Reply frame received for 5 I0312 21:37:07.698063 6 log.go:172] (0xc001eeaa50) Data frame received for 3 I0312 21:37:07.698093 6 log.go:172] (0xc000aed0e0) (3) Data frame handling I0312 21:37:07.698180 6 log.go:172] (0xc000aed0e0) (3) Data frame sent I0312 21:37:07.698421 6 log.go:172] (0xc001eeaa50) Data frame received for 5 I0312 21:37:07.698445 6 log.go:172] (0xc001e3c000) (5) Data frame handling I0312 21:37:07.698731 6 log.go:172] (0xc001eeaa50) Data frame received for 3 I0312 21:37:07.698747 6 log.go:172] (0xc000aed0e0) (3) Data frame handling I0312 21:37:07.700180 6 log.go:172] (0xc001eeaa50) Data frame received for 1 I0312 21:37:07.700197 6 log.go:172] 
(0xc0028486e0) (1) Data frame handling I0312 21:37:07.700209 6 log.go:172] (0xc0028486e0) (1) Data frame sent I0312 21:37:07.700222 6 log.go:172] (0xc001eeaa50) (0xc0028486e0) Stream removed, broadcasting: 1 I0312 21:37:07.700365 6 log.go:172] (0xc001eeaa50) Go away received I0312 21:37:07.700422 6 log.go:172] (0xc001eeaa50) (0xc0028486e0) Stream removed, broadcasting: 1 I0312 21:37:07.700449 6 log.go:172] (0xc001eeaa50) (0xc000aed0e0) Stream removed, broadcasting: 3 I0312 21:37:07.700481 6 log.go:172] (0xc001eeaa50) (0xc001e3c000) Stream removed, broadcasting: 5 Mar 12 21:37:07.700: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:37:07.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-156" for this suite. • [SLOW TEST:22.380 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:37:07.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-5836237a-b6ce-40ea-931a-9ce7c431e682 in namespace container-probe-4096 Mar 12 21:37:09.831: INFO: Started pod busybox-5836237a-b6ce-40ea-931a-9ce7c431e682 in namespace container-probe-4096 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 21:37:09.834: INFO: Initial restart count of pod busybox-5836237a-b6ce-40ea-931a-9ce7c431e682 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:41:10.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4096" for this suite. 
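The probe test above is the negative case: /tmp/health exists for the container's whole lifetime, so the kubelet's exec probe keeps succeeding and restartCount must remain 0 across the four-minute observation window (hence the ~242-second runtime). A minimal sketch of such a pod (illustrative name; probe timings are arbitrary):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-liveness-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "touch /tmp/health; sleep 600"]   # keep the file around
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # after letting it run for a while, the restart count should still be 0:
  kubectl get pod busybox-liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'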
• [SLOW TEST:242.713 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:41:10.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 21:41:10.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2825' Mar 12 21:41:12.210: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 21:41:12.210: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Mar 12 21:41:12.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-2825' Mar 12 21:41:12.315: INFO: stderr: "" Mar 12 21:41:12.315: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:41:12.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2825" for this suite. 
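Note the stderr above: the --generator flags to kubectl run were already deprecated in this release and were later removed. The closest present-day equivalent of the command this test runs is kubectl create job (a sketch, reusing the image from the log):

  kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
  # create job emits a pod template with restartPolicy: Never; to reproduce
  # --restart=OnFailure, apply a Job manifest that sets
  # spec.template.spec.restartPolicy: OnFailure instead
  kubectl delete job e2e-test-httpd-job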
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":130,"skipped":2209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:41:12.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:41:12.377: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 10.990448ms) Mar 12 21:41:12.380: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.314533ms) Mar 12 21:41:12.382: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.08794ms) Mar 12 21:41:12.384: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.944952ms) Mar 12 21:41:12.386: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.474293ms) Mar 12 21:41:12.414: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 28.256626ms) Mar 12 21:41:12.417: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.358554ms) Mar 12 21:41:12.419: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.498933ms) Mar 12 21:41:12.422: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.659431ms) Mar 12 21:41:12.424: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.031842ms) Mar 12 21:41:12.426: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.135336ms) Mar 12 21:41:12.428: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.949931ms) Mar 12 21:41:12.431: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.271151ms) Mar 12 21:41:12.432: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.805397ms) Mar 12 21:41:12.434: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.992325ms) Mar 12 21:41:12.437: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.166791ms) Mar 12 21:41:12.439: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.203455ms) Mar 12 21:41:12.440: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.65132ms) Mar 12 21:41:12.442: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.873854ms) Mar 12 21:41:12.444: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.966398ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:41:12.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6684" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":131,"skipped":2241,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:41:12.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 21:41:14.564: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:41:14.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6075" for this suite.
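The termination-message test above pins down a subtle point: with terminationMessagePolicy: FallbackToLogsOnError, a container that exits 0 reports an empty message even if it wrote logs, because the fallback to logs only applies on failure. A minimal sketch (illustrative name):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "exit 0"]                 # succeed without writing a message
      terminationMessagePath: /dev/termination-log    # the default path, spelled out
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # once terminated, the message field should be empty or absent:
  kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'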
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2245,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:41:14.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-6f93706a-7a08-47e4-9b80-e98910132984 STEP: Creating a pod to test consume secrets Mar 12 21:41:14.662: INFO: Waiting up to 5m0s for pod "pod-secrets-7a427015-3712-4893-81e4-f73d5380c62e" in namespace "secrets-2001" to be "success or failure" Mar 12 21:41:14.681: INFO: Pod "pod-secrets-7a427015-3712-4893-81e4-f73d5380c62e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.969307ms Mar 12 21:41:16.683: INFO: Pod "pod-secrets-7a427015-3712-4893-81e4-f73d5380c62e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021249435s STEP: Saw pod success Mar 12 21:41:16.683: INFO: Pod "pod-secrets-7a427015-3712-4893-81e4-f73d5380c62e" satisfied condition "success or failure" Mar 12 21:41:16.685: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7a427015-3712-4893-81e4-f73d5380c62e container secret-volume-test: STEP: delete the pod Mar 12 21:41:16.708: INFO: Waiting for pod pod-secrets-7a427015-3712-4893-81e4-f73d5380c62e to disappear Mar 12 21:41:16.713: INFO: Pod pod-secrets-7a427015-3712-4893-81e4-f73d5380c62e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:41:16.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2001" for this suite. 
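The secret-volume test above exercises two things at once: remapping a key to a new file name and setting a per-item file mode. A sketch (illustrative names; 0400 is an arbitrary restrictive mode):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: demo-secret                 # hypothetical name
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "stat -L -c '%a' /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400                  # per-item mode, overriding defaultMode
  EOF

(stat -L dereferences the symlink the kubelet places in the volume, so the printed mode is that of the real file.)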
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2264,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:41:16.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 21:41:16.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2287' Mar 12 21:41:16.843: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 21:41:16.843: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 12 21:41:16.975: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-cllbv] Mar 12 21:41:16.975: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-cllbv" in namespace "kubectl-2287" to be "running and ready" Mar 12 21:41:16.990: INFO: Pod "e2e-test-httpd-rc-cllbv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.930758ms Mar 12 21:41:18.994: INFO: Pod "e2e-test-httpd-rc-cllbv": Phase="Running", Reason="", readiness=true. Elapsed: 2.018906461s Mar 12 21:41:18.994: INFO: Pod "e2e-test-httpd-rc-cllbv" satisfied condition "running and ready" Mar 12 21:41:18.994: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-cllbv] Mar 12 21:41:18.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2287' Mar 12 21:41:19.160: INFO: stderr: "" Mar 12 21:41:19.160: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.200. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.200. 
Set the 'ServerName' directive globally to suppress this message\n[Thu Mar 12 21:41:18.147879 2020] [mpm_event:notice] [pid 1:tid 139675877383016] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Mar 12 21:41:18.147921 2020] [core:notice] [pid 1:tid 139675877383016] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 12 21:41:19.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2287' Mar 12 21:41:19.256: INFO: stderr: "" Mar 12 21:41:19.256: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:41:19.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2287" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":134,"skipped":2276,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:41:19.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-af434031-015b-4338-9c9c-3002666129c3 STEP: Creating a pod to test consume configMaps Mar 12 21:41:19.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-596cbbe9-d21e-43ad-8f82-30387f60d65e" in namespace "configmap-2828" to be "success or failure" Mar 12 21:41:19.356: INFO: Pod "pod-configmaps-596cbbe9-d21e-43ad-8f82-30387f60d65e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310858ms Mar 12 21:41:21.360: INFO: Pod "pod-configmaps-596cbbe9-d21e-43ad-8f82-30387f60d65e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006065265s STEP: Saw pod success Mar 12 21:41:21.360: INFO: Pod "pod-configmaps-596cbbe9-d21e-43ad-8f82-30387f60d65e" satisfied condition "success or failure" Mar 12 21:41:21.363: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-596cbbe9-d21e-43ad-8f82-30387f60d65e container configmap-volume-test: STEP: delete the pod Mar 12 21:41:21.387: INFO: Waiting for pod pod-configmaps-596cbbe9-d21e-43ad-8f82-30387f60d65e to disappear Mar 12 21:41:21.391: INFO: Pod pod-configmaps-596cbbe9-d21e-43ad-8f82-30387f60d65e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:41:21.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2828" for this suite. 
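For contrast with the projected variant earlier in this run, the test above uses the plain configMap volume source, which takes the same items mapping directly (a sketch; names are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-config-2               # hypothetical name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo         # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["cat", "/etc/configmap-volume/path/to/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: demo-config-2
        items:
        - key: data-1
          path: path/to/data-1        # intermediate directories are created for you
  EOF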
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2284,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:41:21.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-82ee77b1-628e-449d-836f-f23b2c482f89 in namespace container-probe-9888 Mar 12 21:41:23.516: INFO: Started pod test-webserver-82ee77b1-628e-449d-836f-f23b2c482f89 in namespace container-probe-9888 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 21:41:23.518: INFO: Initial restart count of pod test-webserver-82ee77b1-628e-449d-836f-f23b2c482f89 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:45:24.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9888" for this suite. • [SLOW TEST:242.704 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2292,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:45:24.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0312 21:45:34.207790 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 12 21:45:34.207: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:45:34.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9799" for this suite. • [SLOW TEST:10.113 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":137,"skipped":2310,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:45:34.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 12 21:45:38.335: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 21:45:38.352: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 21:45:40.352: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 21:45:40.355: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 21:45:42.352: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 21:45:42.354: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 21:45:44.352: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 21:45:44.368: INFO: Pod pod-with-prestop-exec-hook still exists Mar 12 21:45:46.352: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 12 21:45:46.374: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:45:46.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8569" for this suite. • [SLOW TEST:12.181 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:45:46.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:45:46.455: INFO: Create a RollingUpdate DaemonSet Mar 12 21:45:46.458: INFO: Check that daemon pods launch on every node of the cluster Mar 12 21:45:46.461: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:45:46.493: INFO: Number of nodes with available pods: 0 Mar 12 21:45:46.493: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:45:47.497: INFO: DaemonSet pods 
can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:45:47.500: INFO: Number of nodes with available pods: 0 Mar 12 21:45:47.500: INFO: Node jerma-worker is running more than one daemon pod Mar 12 21:45:48.498: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:45:48.501: INFO: Number of nodes with available pods: 1 Mar 12 21:45:48.501: INFO: Node jerma-worker2 is running more than one daemon pod Mar 12 21:45:49.497: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:45:49.499: INFO: Number of nodes with available pods: 2 Mar 12 21:45:49.499: INFO: Number of running nodes: 2, number of available pods: 2 Mar 12 21:45:49.499: INFO: Update the DaemonSet to trigger a rollout Mar 12 21:45:49.503: INFO: Updating DaemonSet daemon-set Mar 12 21:45:56.520: INFO: Roll back the DaemonSet before rollout is complete Mar 12 21:45:56.525: INFO: Updating DaemonSet daemon-set Mar 12 21:45:56.525: INFO: Make sure DaemonSet rollback is complete Mar 12 21:45:56.535: INFO: Wrong image for pod: daemon-set-bczsm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 12 21:45:56.535: INFO: Pod daemon-set-bczsm is not available Mar 12 21:45:56.542: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:45:57.548: INFO: Wrong image for pod: daemon-set-bczsm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 12 21:45:57.548: INFO: Pod daemon-set-bczsm is not available Mar 12 21:45:57.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:45:58.546: INFO: Wrong image for pod: daemon-set-bczsm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Mar 12 21:45:58.546: INFO: Pod daemon-set-bczsm is not available Mar 12 21:45:58.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 12 21:45:59.546: INFO: Pod daemon-set-wv9j9 is not available Mar 12 21:45:59.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1365, will wait for the garbage collector to delete the pods Mar 12 21:45:59.618: INFO: Deleting DaemonSet.extensions daemon-set took: 13.064567ms Mar 12 21:45:59.918: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.18933ms Mar 12 21:46:06.128: INFO: Number of nodes with available pods: 0 Mar 12 21:46:06.128: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 21:46:06.130: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1365/daemonsets","resourceVersion":"1247577"},"items":null} Mar 12 21:46:06.132: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1365/pods","resourceVersion":"1247577"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:46:06.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1365" for this suite. 
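The sequence above can be driven by hand with kubectl's rollout subcommands. "Without unnecessary restarts" is the key assertion: pods still running the good image survive the rollback untouched, and only daemon-set-bczsm, stuck pulling foo:non-existent, gets replaced. A sketch, assuming a DaemonSet named daemon-set whose container is named app (both illustrative):

  kubectl set image daemonset/daemon-set app=foo:non-existent   # start a rollout that can never finish
  kubectl rollout undo daemonset/daemon-set                     # roll back before it completes
  kubectl rollout status daemonset/daemon-set --timeout=2m      # converges on the original image
  kubectl rollout history daemonset/daemon-set                  # both revisions remain visible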
• [SLOW TEST:19.748 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":139,"skipped":2366,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:46:06.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-935d6976-a7fb-416c-bd39-8aed30b93667 STEP: Creating a pod to test consume configMaps Mar 12 21:46:06.217: INFO: Waiting up to 5m0s for pod "pod-configmaps-88f28e71-6edb-4f68-9ecf-9367348b3f96" in namespace "configmap-6187" to be "success or failure" Mar 12 21:46:06.221: INFO: Pod "pod-configmaps-88f28e71-6edb-4f68-9ecf-9367348b3f96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674448ms Mar 12 21:46:08.224: INFO: Pod "pod-configmaps-88f28e71-6edb-4f68-9ecf-9367348b3f96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00730623s STEP: Saw pod success Mar 12 21:46:08.224: INFO: Pod "pod-configmaps-88f28e71-6edb-4f68-9ecf-9367348b3f96" satisfied condition "success or failure" Mar 12 21:46:08.227: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-88f28e71-6edb-4f68-9ecf-9367348b3f96 container configmap-volume-test: STEP: delete the pod Mar 12 21:46:08.258: INFO: Waiting for pod pod-configmaps-88f28e71-6edb-4f68-9ecf-9367348b3f96 to disappear Mar 12 21:46:08.263: INFO: Pod pod-configmaps-88f28e71-6edb-4f68-9ecf-9367348b3f96 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:46:08.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6187" for this suite. 
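The non-root variant above differs from the earlier configMap-volume tests only in the pod-level security context. A sketch of the relevant fields (illustrative names; any non-zero UID works):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-nonroot-demo   # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                   # arbitrary non-root UID
      runAsNonRoot: true                # kubelet refuses to start the container as root
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["cat", "/etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: demo-config-2             # reuses the ConfigMap from the earlier sketch
  EOF

The default file mode for configMap volumes (0644) keeps the key readable for the non-root UID; if defaultMode were tightened, fsGroup would be the usual way to keep it readable.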
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2380,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:46:08.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:46:08.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4482' Mar 12 21:46:08.567: INFO: stderr: "" Mar 12 21:46:08.567: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 12 21:46:08.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4482' Mar 12 21:46:08.816: INFO: stderr: "" Mar 12 21:46:08.816: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 21:46:09.820: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 21:46:09.820: INFO: Found 0 / 1 Mar 12 21:46:10.820: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 21:46:10.821: INFO: Found 1 / 1 Mar 12 21:46:10.821: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 12 21:46:10.824: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 21:46:10.824: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 12 21:46:10.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-256tt --namespace=kubectl-4482' Mar 12 21:46:10.935: INFO: stderr: "" Mar 12 21:46:10.935: INFO: stdout: "Name: agnhost-master-256tt\nNamespace: kubectl-4482\nPriority: 0\nNode: jerma-worker/172.17.0.4\nStart Time: Thu, 12 Mar 2020 21:46:08 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.208\nIPs:\n IP: 10.244.2.208\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d7df216810a4dfe464e14a698386cb04ac29dc4ebbcdc58df72000bcb33d9d62\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 12 Mar 2020 21:46:09 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4cw65 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4cw65:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4cw65\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-4482/agnhost-master-256tt to jerma-worker\n Normal Pulled 1s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" Mar 12 21:46:10.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4482' Mar 12 21:46:11.053: INFO: stderr: "" Mar 12 21:46:11.054: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4482\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-256tt\n" Mar 12 21:46:11.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4482' Mar 12 21:46:11.138: INFO: stderr: "" Mar 12 21:46:11.138: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4482\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.213.209\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.208:6379\nSession Affinity: None\nEvents: \n" Mar 12 21:46:11.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 12 21:46:11.292: INFO: stderr: "" Mar 12 21:46:11.292: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:47:04 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Thu, 12 Mar 2020 21:46:07 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 12 Mar 2020 21:45:49 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 12 Mar 2020 21:45:49 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 12 Mar 2020 21:45:49 +0000 Sun, 08 Mar 2020 14:47:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 12 Mar 2020 21:45:49 +0000 Sun, 08 Mar 2020 14:48:18 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 3f4950fefd574d4aaa94513c5781e5d9\n System UUID: 58a385c4-2d08-428a-9405-5e6b12d5bd17\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-6n4ms 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d6h\n kube-system coredns-6955765f44-nlwfn 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d6h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kindnet-2glhp 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d6h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kube-proxy-zmch2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n local-path-storage local-path-provisioner-85445b74d4-gpcbt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d6h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 12 21:46:11.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4482' Mar 12 21:46:11.372: INFO: stderr: "" Mar 12 21:46:11.372: INFO: stdout: "Name: kubectl-4482\nLabels: e2e-framework=kubectl\n e2e-run=07ba79d4-33f5-4122-9a8c-8ab1a2bd106d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:46:11.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4482" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":141,"skipped":2396,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:46:11.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 12 21:46:11.473: INFO: Waiting up to 5m0s for pod "pod-ab4dc85d-9f58-4454-8050-d041fcfd003c" in namespace "emptydir-2451" to be "success or failure" Mar 12 21:46:11.535: INFO: Pod "pod-ab4dc85d-9f58-4454-8050-d041fcfd003c": Phase="Pending", Reason="", readiness=false. Elapsed: 62.682049ms Mar 12 21:46:13.538: INFO: Pod "pod-ab4dc85d-9f58-4454-8050-d041fcfd003c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065384976s STEP: Saw pod success Mar 12 21:46:13.538: INFO: Pod "pod-ab4dc85d-9f58-4454-8050-d041fcfd003c" satisfied condition "success or failure" Mar 12 21:46:13.540: INFO: Trying to get logs from node jerma-worker pod pod-ab4dc85d-9f58-4454-8050-d041fcfd003c container test-container: STEP: delete the pod Mar 12 21:46:13.557: INFO: Waiting for pod pod-ab4dc85d-9f58-4454-8050-d041fcfd003c to disappear Mar 12 21:46:13.562: INFO: Pod pod-ab4dc85d-9f58-4454-8050-d041fcfd003c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:46:13.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2451" for this suite. 
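Note: the EmptyDir test name "(non-root,0666,default)" encodes its three parameters: a non-root user, file mode 0666, and the default medium (node-local disk, as opposed to medium: Memory). A rough stand-alone sketch, assuming busybox and a hypothetical pod name; the suite's actual mounttest image and its flags are not shown in the log:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1000   # non-root; hypothetical UID
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test/f && chmod 0666 /test/f && stat -c '%a' /test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test
  volumes:
  - name: scratch
    emptyDir: {}   # default medium; "medium: Memory" would back it with tmpfs
EOF
kubectl logs emptydir-demo   # expect: 666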
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:46:13.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:46:13.649: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 21:46:16.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8699 create -f -' Mar 12 21:46:18.623: INFO: stderr: "" Mar 12 21:46:18.623: INFO: stdout: "e2e-test-crd-publish-openapi-4774-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 21:46:18.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8699 delete e2e-test-crd-publish-openapi-4774-crds test-cr' Mar 12 21:46:18.761: INFO: stderr: "" Mar 12 21:46:18.761: INFO: stdout: "e2e-test-crd-publish-openapi-4774-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 12 21:46:18.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8699 apply -f -' Mar 12 21:46:19.000: INFO: stderr: "" Mar 12 21:46:19.000: INFO: stdout: "e2e-test-crd-publish-openapi-4774-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 21:46:19.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8699 delete e2e-test-crd-publish-openapi-4774-crds test-cr' Mar 12 21:46:19.089: INFO: stderr: "" Mar 12 21:46:19.089: INFO: stdout: "e2e-test-crd-publish-openapi-4774-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 12 21:46:19.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4774-crds' Mar 12 21:46:19.298: INFO: stderr: "" Mar 12 21:46:19.298: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4774-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:46:22.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8699" for this suite. 
• [SLOW TEST:8.471 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":143,"skipped":2461,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:46:22.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 12 21:46:22.121: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 12 21:46:32.112: INFO: >>> kubeConfig: /root/.kube/config Mar 12 21:46:34.943: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:46:43.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4066" for this suite. 
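Note: the multi-version test checks that every served version of a CRD is published to the cluster's OpenAPI document, both for one multi-version CRD and for two single-version CRDs in the same group. A sketch of the one-CRD shape, with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gadgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: gadgets
    singular: gadget
    kind: Gadget
  versions:
  - name: v1
    served: true
    storage: true      # exactly one version may be the storage version
    schema:
      openAPIV3Schema: {type: object}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema: {type: object}
EOF

Both served versions then show up as distinct definitions under /openapi/v2, which is what the test inspects via the kubeConfig-authenticated client.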
• [SLOW TEST:21.937 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":144,"skipped":2468,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:46:43.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 21:46:44.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4717' Mar 12 21:46:44.101: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 21:46:44.101: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 12 21:46:44.103: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 12 21:46:44.108: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 12 21:46:44.134: INFO: scanned /root for discovery docs: Mar 12 21:46:44.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4717' Mar 12 21:47:00.616: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 12 21:47:00.616: INFO: stdout: "Created e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb\nScaling up e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 12 21:47:00.616: INFO: stdout: "Created e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb\nScaling up e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 12 21:47:00.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4717' Mar 12 21:47:00.712: INFO: stderr: "" Mar 12 21:47:00.712: INFO: stdout: "e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb-m2592 " Mar 12 21:47:00.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb-m2592 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717' Mar 12 21:47:00.802: INFO: stderr: "" Mar 12 21:47:00.802: INFO: stdout: "true" Mar 12 21:47:00.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb-m2592 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4717' Mar 12 21:47:00.883: INFO: stderr: "" Mar 12 21:47:00.884: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 12 21:47:00.884: INFO: e2e-test-httpd-rc-7de1b947a1399e751f5fb3fbc056e1cb-m2592 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678 Mar 12 21:47:00.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4717' Mar 12 21:47:00.985: INFO: stderr: "" Mar 12 21:47:00.985: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:00.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4717" for this suite. 
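Note: the two commands this test drives are reproduced verbatim in the log and can be run by hand; both print deprecation warnings on stderr, exactly as captured above (rolling-update operated on ReplicationControllers and was later removed in favor of Deployments plus kubectl rollout):

kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
kubectl rolling-update e2e-test-httpd-rc --update-period=1s \
  --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent

Because the "new" image equals the old one, the update's only visible effect is the create/scale-up/scale-down/rename sequence in the stdout above.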
• [SLOW TEST:17.012 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":145,"skipped":2472,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:00.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:03.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9777" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2476,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:03.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:47:03.140: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13c578cf-13bb-4aaa-b013-9902bbc7350e" in namespace "projected-8172" to be "success or failure" Mar 12 21:47:03.145: INFO: Pod "downwardapi-volume-13c578cf-13bb-4aaa-b013-9902bbc7350e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.923837ms Mar 12 21:47:05.149: INFO: Pod "downwardapi-volume-13c578cf-13bb-4aaa-b013-9902bbc7350e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00880167s STEP: Saw pod success Mar 12 21:47:05.149: INFO: Pod "downwardapi-volume-13c578cf-13bb-4aaa-b013-9902bbc7350e" satisfied condition "success or failure" Mar 12 21:47:05.152: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-13c578cf-13bb-4aaa-b013-9902bbc7350e container client-container: STEP: delete the pod Mar 12 21:47:05.209: INFO: Waiting for pod downwardapi-volume-13c578cf-13bb-4aaa-b013-9902bbc7350e to disappear Mar 12 21:47:05.217: INFO: Pod downwardapi-volume-13c578cf-13bb-4aaa-b013-9902bbc7350e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:05.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8172" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:05.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 12 21:47:05.329: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7655 /api/v1/namespaces/watch-7655/configmaps/e2e-watch-test-watch-closed c9723f5f-7fe9-4da9-aaf2-f731e6bfd9fe 1248022 0 2020-03-12 21:47:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 21:47:05.329: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7655 /api/v1/namespaces/watch-7655/configmaps/e2e-watch-test-watch-closed c9723f5f-7fe9-4da9-aaf2-f731e6bfd9fe 1248023 0 2020-03-12 21:47:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 12 21:47:05.341: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7655 /api/v1/namespaces/watch-7655/configmaps/e2e-watch-test-watch-closed c9723f5f-7fe9-4da9-aaf2-f731e6bfd9fe 1248024 0 2020-03-12 21:47:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 21:47:05.341: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7655 /api/v1/namespaces/watch-7655/configmaps/e2e-watch-test-watch-closed c9723f5f-7fe9-4da9-aaf2-f731e6bfd9fe 1248025 0 2020-03-12 21:47:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:05.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7655" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":148,"skipped":2518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:05.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 12 21:47:05.401: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix893681245/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:05.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5835" for this suite. 
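Note: the --unix-socket proxy test starts kubectl proxy listening on a Unix domain socket instead of a TCP port, then fetches /api/ through it. Reproduced roughly as below (socket path illustrative; curl needs --unix-socket support, available since curl 7.40):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/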
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":149,"skipped":2549,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:05.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:47:05.539: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 12 21:47:08.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 create -f -' Mar 12 21:47:10.435: INFO: stderr: "" Mar 12 21:47:10.435: INFO: stdout: "e2e-test-crd-publish-openapi-6317-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 12 21:47:10.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 delete e2e-test-crd-publish-openapi-6317-crds test-foo' Mar 12 21:47:10.527: INFO: stderr: "" Mar 12 21:47:10.528: INFO: stdout: "e2e-test-crd-publish-openapi-6317-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 12 21:47:10.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 apply -f -' Mar 12 21:47:10.725: INFO: stderr: "" Mar 12 21:47:10.725: INFO: stdout: "e2e-test-crd-publish-openapi-6317-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 12 21:47:10.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 delete e2e-test-crd-publish-openapi-6317-crds test-foo' Mar 12 21:47:10.803: INFO: stderr: "" Mar 12 21:47:10.803: INFO: stdout: "e2e-test-crd-publish-openapi-6317-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 12 21:47:10.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 create -f -' Mar 12 21:47:11.031: INFO: rc: 1 Mar 12 21:47:11.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 apply -f -' Mar 12 21:47:11.246: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 12 21:47:11.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 create -f -' Mar 12 21:47:11.465: INFO: rc: 1 Mar 12 21:47:11.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6504 apply -f -' Mar 12 21:47:11.731: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 12 21:47:11.731: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6317-crds' Mar 12 21:47:12.001: INFO: stderr: "" Mar 12 21:47:12.001: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6317-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 12 21:47:12.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6317-crds.metadata' Mar 12 21:47:12.194: INFO: stderr: "" Mar 12 21:47:12.194: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6317-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 12 21:47:12.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6317-crds.spec' Mar 12 21:47:12.391: INFO: stderr: "" Mar 12 21:47:12.391: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6317-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 12 21:47:12.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6317-crds.spec.bars' Mar 12 21:47:12.599: INFO: stderr: "" Mar 12 21:47:12.599: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6317-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 12 21:47:12.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6317-crds.spec.bars2' Mar 12 21:47:12.823: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:15.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6504" for this suite. • [SLOW TEST:10.141 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":150,"skipped":2563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:15.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:47:16.163: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:47:18.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646436, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646436, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646436, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646436, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:47:21.205: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:21.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3636" for this suite. STEP: Destroying namespace "webhook-3636-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.063 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":151,"skipped":2608,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:21.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-ab268860-a57a-4ad0-94f0-ddb57ed95a88 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:23.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-330" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:23.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 12 21:47:23.865: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:23.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1467" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":153,"skipped":2656,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:23.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 12 21:47:24.041: INFO: Waiting up to 5m0s for pod "pod-492ba0e4-5630-4348-bbed-9a5a223d4f83" in namespace "emptydir-9702" to be "success or failure" Mar 12 21:47:24.044: INFO: Pod "pod-492ba0e4-5630-4348-bbed-9a5a223d4f83": Phase="Pending", Reason="", readiness=false. Elapsed: 3.854548ms Mar 12 21:47:26.051: INFO: Pod "pod-492ba0e4-5630-4348-bbed-9a5a223d4f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010391513s Mar 12 21:47:28.055: INFO: Pod "pod-492ba0e4-5630-4348-bbed-9a5a223d4f83": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013969751s STEP: Saw pod success Mar 12 21:47:28.055: INFO: Pod "pod-492ba0e4-5630-4348-bbed-9a5a223d4f83" satisfied condition "success or failure" Mar 12 21:47:28.057: INFO: Trying to get logs from node jerma-worker pod pod-492ba0e4-5630-4348-bbed-9a5a223d4f83 container test-container: STEP: delete the pod Mar 12 21:47:28.086: INFO: Waiting for pod pod-492ba0e4-5630-4348-bbed-9a5a223d4f83 to disappear Mar 12 21:47:28.091: INFO: Pod pod-492ba0e4-5630-4348-bbed-9a5a223d4f83 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:28.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9702" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2657,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:28.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-0508a5ad-8e14-4f6f-9c65-956524d4cb1f STEP: Creating a pod to test consume configMaps Mar 12 21:47:28.165: INFO: Waiting up to 5m0s for pod "pod-configmaps-c82cd5f5-33cc-4855-a1e5-f8f112d01666" in namespace "configmap-8561" to be "success or failure" Mar 12 21:47:28.169: INFO: Pod "pod-configmaps-c82cd5f5-33cc-4855-a1e5-f8f112d01666": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541132ms Mar 12 21:47:30.177: INFO: Pod "pod-configmaps-c82cd5f5-33cc-4855-a1e5-f8f112d01666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012235246s STEP: Saw pod success Mar 12 21:47:30.177: INFO: Pod "pod-configmaps-c82cd5f5-33cc-4855-a1e5-f8f112d01666" satisfied condition "success or failure" Mar 12 21:47:30.179: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c82cd5f5-33cc-4855-a1e5-f8f112d01666 container configmap-volume-test: STEP: delete the pod Mar 12 21:47:30.205: INFO: Waiting for pod pod-configmaps-c82cd5f5-33cc-4855-a1e5-f8f112d01666 to disappear Mar 12 21:47:30.209: INFO: Pod pod-configmaps-c82cd5f5-33cc-4855-a1e5-f8f112d01666 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:30.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8561" for this suite. 
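Note: the "Waiting up to 5m0s for pod ... to be 'success or failure'" / "Saw pod success" pattern that recurs through these volume tests is the framework polling the pod phase until it terminates, then reading the container logs. An approximate shell equivalent (pod name hypothetical):

until [ "$(kubectl get pod demo-pod -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 2; done
kubectl logs demo-pod   # the test then asserts on this output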
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:30.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:47:31.029: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:47:34.059: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:47:34.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5456" for this suite. STEP: Destroying namespace "webhook-5456-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":156,"skipped":2688,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:47:34.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9266 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9266 STEP: Creating statefulset with conflicting port in namespace statefulset-9266 STEP: Waiting until pod test-pod will start running in namespace statefulset-9266 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9266 Mar 12 21:47:38.332: INFO: Observed stateful pod in namespace: statefulset-9266, name: ss-0, uid: d568f4d4-e885-4983-b64d-4c520d710790, status phase: Pending. Waiting for statefulset controller to delete. Mar 12 21:47:38.885: INFO: Observed stateful pod in namespace: statefulset-9266, name: ss-0, uid: d568f4d4-e885-4983-b64d-4c520d710790, status phase: Failed. Waiting for statefulset controller to delete. Mar 12 21:47:38.892: INFO: Observed stateful pod in namespace: statefulset-9266, name: ss-0, uid: d568f4d4-e885-4983-b64d-4c520d710790, status phase: Failed. Waiting for statefulset controller to delete. Mar 12 21:47:38.917: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9266 STEP: Removing pod with conflicting port in namespace statefulset-9266 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9266 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 21:47:43.018: INFO: Deleting all statefulset in ns statefulset-9266 Mar 12 21:47:43.021: INFO: Scaling statefulset ss to 0 Mar 12 21:48:03.064: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 21:48:03.066: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:48:03.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9266" for this suite. 
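The behavior asserted above, that the StatefulSet controller keeps recreating ss-0 under the same identity after it fails on a port conflict, can be sketched with a minimal StatefulSet object. Everything here (names, image, the conflicting host-port number) is illustrative, not taken from the suite's source.

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"app": "ss-demo"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless Service created beforehand, as in the log
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "nginx",
						// A fixed HostPort is what makes ss-0 collide with a pod
						// already holding that port on the node; the controller
						// then deletes the Failed pod and recreates ss-0.
						Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: 21017}},
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(b))
}
```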
• [SLOW TEST:28.877 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":157,"skipped":2697,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:48:03.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 12 21:48:03.524: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 12 21:48:05.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646483, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646483, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:48:08.566: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:48:08.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:48:09.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8992" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.733 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":158,"skipped":2704,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:48:09.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 21:48:09.971: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 21:48:09.983: INFO: Waiting for terminating namespaces to be deleted... 
Mar 12 21:48:09.985: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 12 21:48:09.989: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:48:09.989: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 21:48:09.989: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:48:09.989: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 21:48:09.989: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 12 21:48:09.992: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:48:09.992: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 21:48:09.992: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 21:48:09.992: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 12 21:48:10.123: INFO: Pod kindnet-gxwrl requesting resource cpu=100m on Node jerma-worker Mar 12 21:48:10.123: INFO: Pod kindnet-x9bds requesting resource cpu=100m on Node jerma-worker2 Mar 12 21:48:10.123: INFO: Pod kube-proxy-dvgp7 requesting resource cpu=0m on Node jerma-worker Mar 12 21:48:10.123: INFO: Pod kube-proxy-xqsww requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 12 21:48:10.123: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 12 21:48:10.127: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-8bc9f1f7-e419-4a8d-9225-41a79fc7c8e8.15fbaccceef6abf5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4794/filler-pod-8bc9f1f7-e419-4a8d-9225-41a79fc7c8e8 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-8bc9f1f7-e419-4a8d-9225-41a79fc7c8e8.15fbaccd203c3610], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8bc9f1f7-e419-4a8d-9225-41a79fc7c8e8.15fbaccd30b3bb15], Reason = [Created], Message = [Created container filler-pod-8bc9f1f7-e419-4a8d-9225-41a79fc7c8e8] STEP: Considering event: Type = [Normal], Name = [filler-pod-8bc9f1f7-e419-4a8d-9225-41a79fc7c8e8.15fbaccd3c290eb7], Reason = [Started], Message = [Started container filler-pod-8bc9f1f7-e419-4a8d-9225-41a79fc7c8e8] STEP: Considering event: Type = [Normal], Name = [filler-pod-d484565f-f814-4af9-b902-9eb28ab8f91d.15fbacccef53443b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4794/filler-pod-d484565f-f814-4af9-b902-9eb28ab8f91d to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d484565f-f814-4af9-b902-9eb28ab8f91d.15fbaccd1f9daef9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d484565f-f814-4af9-b902-9eb28ab8f91d.15fbaccd31105eff], Reason = [Created], Message = [Created container filler-pod-d484565f-f814-4af9-b902-9eb28ab8f91d] STEP: Considering event: Type = [Normal], Name = [filler-pod-d484565f-f814-4af9-b902-9eb28ab8f91d.15fbaccd3c290ecd], Reason = [Started], Message = [Started container filler-pod-d484565f-f814-4af9-b902-9eb28ab8f91d] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fbaccdde61fae6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fbaccddf2c959a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:48:15.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4794" for this suite. 
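A rough sketch of the "filler pod" trick used above: schedule a pause pod onto a labeled node with a CPU request sized to the node's remaining allocatable CPU, so that one further CPU-requesting pod fails with "Insufficient cpu". The 11130m figure echoes the log; the label key and other names are assumptions. Note the scheduler rejects the extra pod based on the sum of declared requests, not on actual CPU usage.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	filler := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			// Target the node via the temporary label the test applied
			// ("verifying the node has the label node jerma-worker").
			NodeSelector: map[string]string{"node": "jerma-worker"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// Allocatable minus already-requested CPU; 11130m is the
					// figure computed in the log above.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("11130m")},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(filler, "", "  ")
	fmt.Println(string(b))
}
```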
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.438 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":159,"skipped":2706,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:48:15.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f in namespace container-probe-3769 Mar 12 21:48:17.318: INFO: Started pod liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f in namespace container-probe-3769 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 21:48:17.320: INFO: Initial restart count of pod liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f is 0 Mar 12 21:48:29.364: INFO: Restart count of pod container-probe-3769/liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f is now 1 (12.043878209s elapsed) Mar 12 21:48:49.397: INFO: Restart count of pod container-probe-3769/liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f is now 2 (32.077245707s elapsed) Mar 12 21:49:09.454: INFO: Restart count of pod container-probe-3769/liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f is now 3 (52.133915667s elapsed) Mar 12 21:49:29.492: INFO: Restart count of pod container-probe-3769/liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f is now 4 (1m12.171790545s elapsed) Mar 12 21:50:29.600: INFO: Restart count of pod container-probe-3769/liveness-a07a89a8-9bd5-467c-b343-892e6eb4440f is now 5 (2m12.279875738s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:50:29.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3769" for this suite. 
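The monotonically increasing restart count comes from a liveness probe that begins failing once the container removes its own health file. Below is a minimal sketch of such a pod; the shell trick, names, and timings are illustrative. The probe handler is assigned through Go's promoted fields so the snippet compiles whether the embedded type is called Handler (older k8s.io/api releases) or ProbeHandler (newer ones).

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	probe := &corev1.Probe{InitialDelaySeconds: 5, PeriodSeconds: 5, FailureThreshold: 1}
	// "cat /tmp/health" succeeds only while the file exists.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for 10 seconds, then the probe fails and the kubelet
				// restarts the container, bumping restartCount each cycle.
				Command:       []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: probe,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```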
• [SLOW TEST:134.406 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2712,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:50:29.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8584/secret-test-693c90a4-2993-44c1-937e-4d5f7152fa2f STEP: Creating a pod to test consume secrets Mar 12 21:50:29.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-43dc1a2e-b9e1-4f09-813f-81c2169dc010" in namespace "secrets-8584" to be "success or failure" Mar 12 21:50:29.766: INFO: Pod "pod-configmaps-43dc1a2e-b9e1-4f09-813f-81c2169dc010": Phase="Pending", Reason="", readiness=false. Elapsed: 4.983401ms Mar 12 21:50:31.770: INFO: Pod "pod-configmaps-43dc1a2e-b9e1-4f09-813f-81c2169dc010": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008586932s STEP: Saw pod success Mar 12 21:50:31.770: INFO: Pod "pod-configmaps-43dc1a2e-b9e1-4f09-813f-81c2169dc010" satisfied condition "success or failure" Mar 12 21:50:31.772: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-43dc1a2e-b9e1-4f09-813f-81c2169dc010 container env-test: STEP: delete the pod Mar 12 21:50:31.804: INFO: Waiting for pod pod-configmaps-43dc1a2e-b9e1-4f09-813f-81c2169dc010 to disappear Mar 12 21:50:31.808: INFO: Pod pod-configmaps-43dc1a2e-b9e1-4f09-813f-81c2169dc010 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:50:31.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8584" for this suite. 
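The pattern under test here, a Secret key surfaced as a container environment variable, looks roughly like the sketch below; the secret name, key, and variable name are made up for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						// Pull one key out of an existing Secret into the env.
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```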
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2728,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:50:31.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:50:32.609: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:50:35.641: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 12 21:50:37.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5166 to-be-attached-pod -i -c=container1' Mar 12 21:50:37.838: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:50:37.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5166" for this suite. STEP: Destroying namespace "webhook-5166-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.098 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":162,"skipped":2730,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:50:37.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:50:37.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f7710a8-4221-4b4d-9b30-fdeaab623c56" in namespace "downward-api-9537" to be "success or failure" Mar 12 21:50:38.000: INFO: Pod "downwardapi-volume-9f7710a8-4221-4b4d-9b30-fdeaab623c56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145716ms Mar 12 21:50:40.003: INFO: Pod "downwardapi-volume-9f7710a8-4221-4b4d-9b30-fdeaab623c56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008047249s STEP: Saw pod success Mar 12 21:50:40.003: INFO: Pod "downwardapi-volume-9f7710a8-4221-4b4d-9b30-fdeaab623c56" satisfied condition "success or failure" Mar 12 21:50:40.006: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9f7710a8-4221-4b4d-9b30-fdeaab623c56 container client-container: STEP: delete the pod Mar 12 21:50:40.042: INFO: Waiting for pod downwardapi-volume-9f7710a8-4221-4b4d-9b30-fdeaab623c56 to disappear Mar 12 21:50:40.057: INFO: Pod downwardapi-volume-9f7710a8-4221-4b4d-9b30-fdeaab623c56 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:50:40.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9537" for this suite. 
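For the Downward API volume with DefaultMode, the pod is shaped roughly as follows. The 0400 mode, file path, and names are illustrative assumptions; DefaultMode applies to every projected file unless an individual item overrides it.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // the permission bits under test; 0644 is the API default
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							// Project the pod's own name into a file.
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```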
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2734,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:50:40.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:50:40.114: INFO: Creating deployment "test-recreate-deployment" Mar 12 21:50:40.130: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 12 21:50:40.143: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 12 21:50:42.149: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 12 21:50:42.151: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 12 21:50:42.156: INFO: Updating deployment test-recreate-deployment Mar 12 21:50:42.156: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 21:50:42.478: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9590 /apis/apps/v1/namespaces/deployment-9590/deployments/test-recreate-deployment 659421d9-2ebe-44c5-bd70-1cc53ce1ef75 1249413 2 2020-03-12 21:50:40 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ce6948 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum 
availability.,LastUpdateTime:2020-03-12 21:50:42 +0000 UTC,LastTransitionTime:2020-03-12 21:50:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-12 21:50:42 +0000 UTC,LastTransitionTime:2020-03-12 21:50:40 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 12 21:50:42.497: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9590 /apis/apps/v1/namespaces/deployment-9590/replicasets/test-recreate-deployment-5f94c574ff dc5a50f6-de9f-4610-8070-9e5dac606a51 1249412 1 2020-03-12 21:50:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 659421d9-2ebe-44c5-bd70-1cc53ce1ef75 0xc00569f887 0xc00569f888}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00569f8e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 21:50:42.497: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 12 21:50:42.497: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-9590 /apis/apps/v1/namespaces/deployment-9590/replicasets/test-recreate-deployment-799c574856 3608c304-9467-4e0a-a77a-a0643e3b1c43 1249402 2 2020-03-12 21:50:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 659421d9-2ebe-44c5-bd70-1cc53ce1ef75 0xc00569f957 0xc00569f958}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00569f9c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 21:50:42.499: INFO: Pod "test-recreate-deployment-5f94c574ff-vjd8d" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-vjd8d test-recreate-deployment-5f94c574ff- deployment-9590 /api/v1/namespaces/deployment-9590/pods/test-recreate-deployment-5f94c574ff-vjd8d 081e997b-9d6a-414c-ac70-b178618988ad 1249414 0 2020-03-12 21:50:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff dc5a50f6-de9f-4610-8070-9e5dac606a51 0xc004ce6ce7 0xc004ce6ce8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kp99s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kp99s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kp99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySprea
dConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:50:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:50:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:50:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 21:50:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-12 21:50:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:50:42.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9590" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":164,"skipped":2741,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:50:42.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-jfkp STEP: Creating a pod to test atomic-volume-subpath Mar 12 21:50:42.677: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jfkp" in namespace "subpath-8217" to be "success or failure" Mar 12 21:50:42.747: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Pending", Reason="", readiness=false. Elapsed: 70.260329ms Mar 12 21:50:44.751: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 2.073899242s Mar 12 21:50:46.754: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.077405686s Mar 12 21:50:48.757: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 6.080818593s Mar 12 21:50:50.761: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 8.084429608s Mar 12 21:50:52.765: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 10.088286s Mar 12 21:50:54.768: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 12.091783406s Mar 12 21:50:56.772: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 14.095569829s Mar 12 21:50:58.776: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 16.099301187s Mar 12 21:51:00.781: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 18.103879058s Mar 12 21:51:02.784: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Running", Reason="", readiness=true. Elapsed: 20.1076824s Mar 12 21:51:04.788: INFO: Pod "pod-subpath-test-configmap-jfkp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.111160687s STEP: Saw pod success Mar 12 21:51:04.788: INFO: Pod "pod-subpath-test-configmap-jfkp" satisfied condition "success or failure" Mar 12 21:51:04.790: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-jfkp container test-container-subpath-configmap-jfkp: STEP: delete the pod Mar 12 21:51:04.821: INFO: Waiting for pod pod-subpath-test-configmap-jfkp to disappear Mar 12 21:51:04.826: INFO: Pod pod-subpath-test-configmap-jfkp no longer exists STEP: Deleting pod pod-subpath-test-configmap-jfkp Mar 12 21:51:04.826: INFO: Deleting pod "pod-subpath-test-configmap-jfkp" in namespace "subpath-8217" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:04.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8217" for this suite. 
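The "mountPath of existing file" case relies on subPath: a single key of the volume is mounted over one file, instead of the volume shadowing the whole directory. A hypothetical sketch, where the ConfigMap name, key, and target path are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "subpath-reader",
				Image:   "busybox",
				Command: []string{"cat", "/etc/resolv.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "cm-vol",
					// SubPath mounts only the "data-1" key of the volume, over
					// a file that already exists in the container image.
					MountPath: "/etc/resolv.conf",
					SubPath:   "data-1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm-vol",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```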
• [SLOW TEST:22.332 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":165,"skipped":2744,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:04.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d Mar 12 21:51:04.910: INFO: Pod name my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d: Found 0 pods out of 1 Mar 12 21:51:09.913: INFO: Pod name my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d: Found 1 pods out of 1 Mar 12 21:51:09.913: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d" are running Mar 12 21:51:09.928: INFO: Pod "my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d-hd6vc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:51:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:51:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:51:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-12 21:51:04 +0000 UTC Reason: Message:}]) Mar 12 21:51:09.928: INFO: Trying to dial the pod Mar 12 21:51:14.939: INFO: Controller my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d: Got expected result from replica 1 [my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d-hd6vc]: "my-hostname-basic-1841829b-bbd0-4967-aa66-1a29f0ec6d3d-hd6vc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:14.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-136" for this suite. 
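The ReplicationController under test serves each replica's hostname over HTTP, which is what the "Trying to dial the pod" check reads back. A minimal equivalent object follows; the names are invented, and while agnhost's serve-hostname subcommand on port 9376 matches the image shown elsewhere in this log, treat those details as assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // RC selectors are plain label maps
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
						// serve-hostname answers each HTTP request with the
						// pod's own name, so each replica identifies itself.
						Args:  []string{"serve-hostname"},
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(b))
}
```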
• [SLOW TEST:10.109 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":166,"skipped":2746,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:14.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 12 21:51:21.074: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 21:51:21.078: INFO: Pod pod-with-prestop-http-hook still exists Mar 12 21:51:23.079: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 21:51:23.083: INFO: Pod pod-with-prestop-http-hook still exists Mar 12 21:51:25.079: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 21:51:25.082: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:25.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7466" for this suite. • [SLOW TEST:10.149 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2746,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:25.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:36.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8549" for this suite. • [SLOW TEST:11.184 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":168,"skipped":2747,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:36.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-6819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6819 to expose endpoints map[] Mar 12 21:51:36.372: INFO: Get endpoints failed (5.992776ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 12 21:51:37.376: INFO: successfully validated that service endpoint-test2 in namespace services-6819 exposes endpoints map[] (1.009930412s elapsed) STEP: Creating pod pod1 in namespace services-6819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6819 to expose endpoints map[pod1:[80]] Mar 12 21:51:40.445: INFO: successfully validated that service endpoint-test2 in namespace services-6819 exposes endpoints map[pod1:[80]] (3.060830562s elapsed) STEP: Creating pod pod2 in namespace services-6819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6819 to expose endpoints map[pod1:[80] pod2:[80]] Mar 12 21:51:42.528: INFO: successfully validated that service endpoint-test2 in namespace services-6819 exposes endpoints map[pod1:[80] pod2:[80]] (2.079810185s elapsed) STEP: Deleting pod pod1 in namespace services-6819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6819 to expose endpoints map[pod2:[80]] Mar 12 21:51:43.589: INFO: successfully validated that service endpoint-test2 in namespace services-6819 exposes endpoints map[pod2:[80]] (1.057059779s elapsed) STEP: Deleting pod pod2 in namespace services-6819 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6819 to expose endpoints map[] Mar 12 21:51:44.634: INFO: successfully validated that service endpoint-test2 in namespace services-6819 exposes endpoints map[] (1.006690431s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:44.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6819" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.397 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":169,"skipped":2752,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:44.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:51:45.248: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:51:47.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646705, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646705, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646705, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:51:50.312: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which 
should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:50.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3378" for this suite. STEP: Destroying namespace "webhook-3378-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.073 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":170,"skipped":2770,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:50.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:51:50.895: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:51.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8759" for this suite. 
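For reference, the create/delete cycle this spec exercises goes through the apiextensions API group rather than the core client. A minimal client-go sketch follows; it is illustrative only, not the test's fixture: the "widgets.example.com" definition is invented, the kubeconfig path is copied from the log above, and a recent apiextensions-apiserver client (v0.18+, where calls take a context) is assumed, whereas the v1.17 suite here still drove the v1beta1 API.

package main

import (
	"context"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	preserve := true
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // illustrative name
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// v1 CRDs require a schema; a bare object that preserves
				// unknown fields is the minimal "schemaless" form.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	ctx := context.Background()
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", crd.Name)
	// Deleting the definition also removes all custom objects of that kind,
	// which is the cleanup behavior the spec relies on.
	if err := cs.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}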
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":171,"skipped":2776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:51.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 21:51:52.044: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:56.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2095" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":172,"skipped":2812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:56.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-d431ea50-7c3b-458d-acf4-9ac2c8a305d7 STEP: Creating a pod to test consume secrets Mar 12 21:51:56.326: INFO: Waiting up to 5m0s for pod "pod-secrets-34bdcf6e-0612-4c5b-acd1-465da050e5c9" in namespace "secrets-3184" to be "success or failure" Mar 12 21:51:56.370: INFO: Pod "pod-secrets-34bdcf6e-0612-4c5b-acd1-465da050e5c9": Phase="Pending", Reason="", readiness=false. Elapsed: 43.705353ms Mar 12 21:51:58.373: INFO: Pod "pod-secrets-34bdcf6e-0612-4c5b-acd1-465da050e5c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.046768703s STEP: Saw pod success Mar 12 21:51:58.373: INFO: Pod "pod-secrets-34bdcf6e-0612-4c5b-acd1-465da050e5c9" satisfied condition "success or failure" Mar 12 21:51:58.375: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-34bdcf6e-0612-4c5b-acd1-465da050e5c9 container secret-volume-test: STEP: delete the pod Mar 12 21:51:58.399: INFO: Waiting for pod pod-secrets-34bdcf6e-0612-4c5b-acd1-465da050e5c9 to disappear Mar 12 21:51:58.409: INFO: Pod pod-secrets-34bdcf6e-0612-4c5b-acd1-465da050e5c9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:51:58.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3184" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2846,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:51:58.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:51:58.506: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9a2505bf-b5e3-4372-865c-878d4f8caeb7", Controller:(*bool)(0xc003713052), BlockOwnerDeletion:(*bool)(0xc003713053)}} Mar 12 21:51:58.525: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"26ec1527-15fe-414e-8c83-c0848879046f", Controller:(*bool)(0xc005684902), BlockOwnerDeletion:(*bool)(0xc005684903)}} Mar 12 21:51:58.530: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d33231d9-89c9-4612-bd40-f3bb3a0887a9", Controller:(*bool)(0xc005649ce2), BlockOwnerDeletion:(*bool)(0xc005649ce3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:52:03.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3398" for this suite. 
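The three OwnerReferences dumps above show the circle the collector must tolerate: pod1 owned by pod3, pod2 by pod1, pod3 by pod2. A minimal client-go sketch of attaching one such reference follows; pod names and the "default" namespace are illustrative, a v0.18+ client (context-taking calls) is assumed, and this is not the test's own code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	pods := cs.CoreV1().Pods("default")

	owner, err := pods.Get(ctx, "pod3", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	dependent, err := pods.Get(ctx, "pod1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	controller, block := true, true
	// Point pod1 at pod3; repeating this pattern pod2->pod1 and pod3->pod2
	// closes the circle seen in the log above.
	dependent.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID, // must match the live object's UID
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}}
	if _, err := pods.Update(ctx, dependent, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod1 now lists pod3 as its owner")
}

With blockOwnerDeletion set on every edge, naive foreground deletion could deadlock on such a cycle; the spec deletes the pods and expects the collector to break the circle rather than hang.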
• [SLOW TEST:5.188 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":174,"skipped":2867,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:52:03.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:52:03.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-365e6879-85d1-4e91-bd66-ea8d7ce9f386" in namespace "downward-api-1135" to be "success or failure" Mar 12 21:52:03.712: INFO: Pod "downwardapi-volume-365e6879-85d1-4e91-bd66-ea8d7ce9f386": Phase="Pending", Reason="", readiness=false. Elapsed: 34.769598ms Mar 12 21:52:05.715: INFO: Pod "downwardapi-volume-365e6879-85d1-4e91-bd66-ea8d7ce9f386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.038455698s STEP: Saw pod success Mar 12 21:52:05.716: INFO: Pod "downwardapi-volume-365e6879-85d1-4e91-bd66-ea8d7ce9f386" satisfied condition "success or failure" Mar 12 21:52:05.719: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-365e6879-85d1-4e91-bd66-ea8d7ce9f386 container client-container: STEP: delete the pod Mar 12 21:52:05.742: INFO: Waiting for pod downwardapi-volume-365e6879-85d1-4e91-bd66-ea8d7ce9f386 to disappear Mar 12 21:52:05.745: INFO: Pod downwardapi-volume-365e6879-85d1-4e91-bd66-ea8d7ce9f386 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:52:05.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1135" for this suite. 
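What "set mode on item file" boils down to is the per-item Mode field of a downward-API volume. A minimal sketch of the pod shape involved (pod name, image, and the 0400 mode are illustrative, not the framework's fixture; assumes a v0.18+ client-go):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mode := int32(0400) // file mode the kubelet applies to the projected item
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29",
				// Print the mode so it can be asserted on, as the test does.
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
							Mode:     &mode, // the per-item knob this spec exercises
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}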
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2872,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:52:05.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 21:52:08.364: INFO: Successfully updated pod "labelsupdatefb01c239-41c8-483f-b744-9d73ba67d724" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:52:12.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2196" for this suite. • [SLOW TEST:6.667 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2873,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:52:12.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 12 21:52:14.487: INFO: Pod pod-hostip-6b0862ae-57f0-46e7-93f3-4c5c20e242c9 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:52:14.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8756" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2880,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:52:14.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 STEP: creating an pod Mar 12 21:52:14.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3728 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 12 21:52:14.707: INFO: stderr: "" Mar 12 21:52:14.707: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 12 21:52:14.707: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 12 21:52:14.707: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3728" to be "running and ready, or succeeded" Mar 12 21:52:14.716: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.754622ms Mar 12 21:52:16.719: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.011685911s Mar 12 21:52:16.719: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 12 21:52:16.719: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Mar 12 21:52:16.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3728' Mar 12 21:52:16.820: INFO: stderr: "" Mar 12 21:52:16.820: INFO: stdout: "I0312 21:52:15.771979 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/pkl 483\nI0312 21:52:15.972104 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/z9cw 553\nI0312 21:52:16.172172 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/579p 342\nI0312 21:52:16.372119 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/drn6 433\nI0312 21:52:16.572108 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/srl 217\nI0312 21:52:16.772109 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/s5q8 560\n" STEP: limiting log lines Mar 12 21:52:16.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3728 --tail=1' Mar 12 21:52:16.896: INFO: stderr: "" Mar 12 21:52:16.896: INFO: stdout: "I0312 21:52:16.772109 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/s5q8 560\n" Mar 12 21:52:16.896: INFO: got output "I0312 21:52:16.772109 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/s5q8 560\n" STEP: limiting log bytes Mar 12 21:52:16.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3728 --limit-bytes=1' Mar 12 21:52:16.969: INFO: stderr: "" Mar 12 21:52:16.969: INFO: stdout: "I" Mar 12 21:52:16.969: INFO: got output "I" STEP: exposing timestamps Mar 12 21:52:16.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3728 --tail=1 --timestamps' Mar 12 21:52:17.035: INFO: stderr: "" Mar 12 21:52:17.035: INFO: stdout: "2020-03-12T21:52:16.972196111Z I0312 21:52:16.972094 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/klm 441\n" Mar 12 21:52:17.035: INFO: got output "2020-03-12T21:52:16.972196111Z I0312 21:52:16.972094 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/klm 441\n" STEP: restricting to a time range Mar 12 21:52:19.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3728 --since=1s' Mar 12 21:52:19.655: INFO: stderr: "" Mar 12 21:52:19.655: INFO: stdout: "I0312 21:52:18.772144 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/2jhv 479\nI0312 21:52:18.972170 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/kmt7 462\nI0312 21:52:19.172189 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/sxw 541\nI0312 21:52:19.372155 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/mtk8 422\nI0312 21:52:19.572148 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/45ws 547\n" Mar 12 21:52:19.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3728 --since=24h' Mar 12 21:52:19.740: INFO: stderr: "" Mar 12 21:52:19.740: INFO: stdout: "I0312 21:52:15.771979 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/pkl 483\nI0312 21:52:15.972104 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/z9cw 553\nI0312 21:52:16.172172 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/579p 342\nI0312 21:52:16.372119 1 logs_generator.go:76] 3 POST 
/api/v1/namespaces/default/pods/drn6 433\nI0312 21:52:16.572108 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/srl 217\nI0312 21:52:16.772109 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/s5q8 560\nI0312 21:52:16.972094 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/klm 441\nI0312 21:52:17.172147 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/282 597\nI0312 21:52:17.372152 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/2qh 399\nI0312 21:52:17.572119 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/nfw8 589\nI0312 21:52:17.772158 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/khm 537\nI0312 21:52:17.972169 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/m582 459\nI0312 21:52:18.172132 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/zkw 302\nI0312 21:52:18.372177 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/z7c 378\nI0312 21:52:18.572176 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/c2f4 539\nI0312 21:52:18.772144 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/2jhv 479\nI0312 21:52:18.972170 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/kmt7 462\nI0312 21:52:19.172189 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/sxw 541\nI0312 21:52:19.372155 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/mtk8 422\nI0312 21:52:19.572148 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/45ws 547\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 Mar 12 21:52:19.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3728' Mar 12 21:52:26.092: INFO: stderr: "" Mar 12 21:52:26.092: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:52:26.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3728" for this suite. 
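Each kubectl flag exercised above has a direct counterpart in PodLogOptions, which is what kubectl drives underneath. A sketch combining them in one request (the test issues them separately; namespace and names are copied from the log, and a v0.18+ client-go is assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	tail, limit, since := int64(1), int64(1), int64(1)
	// Each field mirrors one flag used above:
	//   TailLines    -> --tail=1
	//   LimitBytes   -> --limit-bytes=1
	//   Timestamps   -> --timestamps
	//   SinceSeconds -> --since=1s
	opts := &corev1.PodLogOptions{
		Container:    "logs-generator",
		TailLines:    &tail,
		LimitBytes:   &limit,
		Timestamps:   true,
		SinceSeconds: &since,
	}
	raw, err := cs.CoreV1().Pods("kubectl-3728").GetLogs("logs-generator", opts).DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}

SinceSeconds, like --since, is evaluated against the log timestamps, which is why the two --since calls above return five lines and all twenty lines respectively.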
• [SLOW TEST:11.609 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":178,"skipped":2881,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:52:26.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:53:26.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6300" for this suite. 
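The spec above leans on the asymmetry between probe types: a readiness probe that always fails keeps the pod NotReady forever but never restarts it, whereas a failing liveness probe would. The suite then watches for about a minute (the 60-second SLOW TEST below) and expects Ready to stay false with RestartCount 0. A minimal sketch of such a pod (names and timings are illustrative; note the ProbeHandler field is named Handler in client-go releases before v0.23):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
				// /bin/false always exits non-zero, so every probe attempt
				// fails; the pod stays out of endpoints but is never killed.
				ReadinessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}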
• [SLOW TEST:60.081 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2885,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:53:26.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 12 21:53:26.239: INFO: PodSpec: initContainers in spec.initContainers Mar 12 21:54:12.443: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-20ccb788-eabc-46c5-a9c4-abeed51dfe6f", GenerateName:"", Namespace:"init-container-9473", SelfLink:"/api/v1/namespaces/init-container-9473/pods/pod-init-20ccb788-eabc-46c5-a9c4-abeed51dfe6f", UID:"0d3764b5-1d2d-4ac0-949b-0fee6f62a4a5", ResourceVersion:"1250623", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719646806, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"239949334"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bfpgc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004bdc400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bfpgc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bfpgc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bfpgc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002fd85f8), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029301e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002fd8680)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002fd86a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002fd86a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002fd86ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646806, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646806, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646806, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646806, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.228", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.228"}}, StartTime:(*v1.Time)(0xc0056061e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010982a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001098380)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://aa5088ed12a45446e39da54aa62eaa738d0b7dfbb5b848ed8fc6d48d6bcb00ba", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005606220), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005606200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002fd872f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:54:12.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9473" for this suite. • [SLOW TEST:46.308 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":180,"skipped":2890,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:54:12.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-d476dec5-1e1f-4249-95b4-3704382efa01 STEP: Creating configMap with name cm-test-opt-upd-c38f22cd-0fb8-4730-846e-947709a699e1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d476dec5-1e1f-4249-95b4-3704382efa01 STEP: Updating configmap cm-test-opt-upd-c38f22cd-0fb8-4730-846e-947709a699e1 STEP: Creating configMap with name cm-test-opt-create-2572613c-14cc-471d-9e47-7f80fab919ca STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:55:47.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7008" for this suite. 
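The moving part in the spec above is a projected configMap source marked Optional, which lets the kubelet add and remove files in the mounted volume as the configMaps are created, updated, and deleted. A minimal sketch of one such source (names are illustrative, not the test's generated ones; assumes a v0.18+ client-go):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
								// Optional lets the pod run even after this
								// configMap is deleted; the kubelet then drops
								// the projected files, which is the update the
								// test waits to observe in the volume.
								Optional: &optional,
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}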
• [SLOW TEST:94.600 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2906,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:55:47.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 21:55:47.155: INFO: Waiting up to 5m0s for pod "downward-api-34cc9745-a92f-44e3-a339-14667e603417" in namespace "downward-api-3560" to be "success or failure" Mar 12 21:55:47.167: INFO: Pod "downward-api-34cc9745-a92f-44e3-a339-14667e603417": Phase="Pending", Reason="", readiness=false. Elapsed: 11.725116ms Mar 12 21:55:49.171: INFO: Pod "downward-api-34cc9745-a92f-44e3-a339-14667e603417": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015956325s STEP: Saw pod success Mar 12 21:55:49.171: INFO: Pod "downward-api-34cc9745-a92f-44e3-a339-14667e603417" satisfied condition "success or failure" Mar 12 21:55:49.174: INFO: Trying to get logs from node jerma-worker2 pod downward-api-34cc9745-a92f-44e3-a339-14667e603417 container dapi-container: STEP: delete the pod Mar 12 21:55:49.206: INFO: Waiting for pod downward-api-34cc9745-a92f-44e3-a339-14667e603417 to disappear Mar 12 21:55:49.210: INFO: Pod downward-api-34cc9745-a92f-44e3-a339-14667e603417 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:55:49.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3560" for this suite. 
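The downward-API wiring checked above is an env var whose value comes from a fieldRef; metadata.uid is resolved by the kubelet when the container starts, so the process sees its own pod's UID. A minimal sketch (pod and container names are illustrative; assumes a v0.18+ client-go):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-uid-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}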
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2916,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:55:49.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3510 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3510 STEP: creating replication controller externalsvc in namespace services-3510 I0312 21:55:49.367687 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3510, replica count: 2 I0312 21:55:52.418095 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 12 21:55:52.462: INFO: Creating new exec pod Mar 12 21:55:56.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3510 execpodmrs8h -- /bin/sh -x -c nslookup clusterip-service' Mar 12 21:55:56.686: INFO: stderr: "I0312 21:55:56.610212 1964 log.go:172] (0xc0009d5550) (0xc00092c820) Create stream\nI0312 21:55:56.610252 1964 log.go:172] (0xc0009d5550) (0xc00092c820) Stream added, broadcasting: 1\nI0312 21:55:56.614087 1964 log.go:172] (0xc0009d5550) Reply frame received for 1\nI0312 21:55:56.614174 1964 log.go:172] (0xc0009d5550) (0xc00064e640) Create stream\nI0312 21:55:56.614185 1964 log.go:172] (0xc0009d5550) (0xc00064e640) Stream added, broadcasting: 3\nI0312 21:55:56.614908 1964 log.go:172] (0xc0009d5550) Reply frame received for 3\nI0312 21:55:56.614936 1964 log.go:172] (0xc0009d5550) (0xc000559400) Create stream\nI0312 21:55:56.614948 1964 log.go:172] (0xc0009d5550) (0xc000559400) Stream added, broadcasting: 5\nI0312 21:55:56.615706 1964 log.go:172] (0xc0009d5550) Reply frame received for 5\nI0312 21:55:56.672580 1964 log.go:172] (0xc0009d5550) Data frame received for 5\nI0312 21:55:56.672606 1964 log.go:172] (0xc000559400) (5) Data frame handling\nI0312 21:55:56.672621 1964 log.go:172] (0xc000559400) (5) Data frame sent\n+ nslookup clusterip-service\nI0312 21:55:56.679100 1964 log.go:172] (0xc0009d5550) Data frame received for 3\nI0312 21:55:56.679131 1964 log.go:172] (0xc00064e640) (3) Data frame handling\nI0312 21:55:56.679145 1964 log.go:172] (0xc00064e640) (3) Data frame sent\nI0312 21:55:56.680904 1964 log.go:172] (0xc0009d5550) Data frame received for 3\nI0312 21:55:56.680925 1964 log.go:172] (0xc00064e640) (3) Data frame handling\nI0312 21:55:56.680939 1964 log.go:172] 
(0xc00064e640) (3) Data frame sent\nI0312 21:55:56.681080 1964 log.go:172] (0xc0009d5550) Data frame received for 5\nI0312 21:55:56.681107 1964 log.go:172] (0xc000559400) (5) Data frame handling\nI0312 21:55:56.681130 1964 log.go:172] (0xc0009d5550) Data frame received for 3\nI0312 21:55:56.681139 1964 log.go:172] (0xc00064e640) (3) Data frame handling\nI0312 21:55:56.682705 1964 log.go:172] (0xc0009d5550) Data frame received for 1\nI0312 21:55:56.682728 1964 log.go:172] (0xc00092c820) (1) Data frame handling\nI0312 21:55:56.682742 1964 log.go:172] (0xc00092c820) (1) Data frame sent\nI0312 21:55:56.682754 1964 log.go:172] (0xc0009d5550) (0xc00092c820) Stream removed, broadcasting: 1\nI0312 21:55:56.682809 1964 log.go:172] (0xc0009d5550) Go away received\nI0312 21:55:56.683019 1964 log.go:172] (0xc0009d5550) (0xc00092c820) Stream removed, broadcasting: 1\nI0312 21:55:56.683032 1964 log.go:172] (0xc0009d5550) (0xc00064e640) Stream removed, broadcasting: 3\nI0312 21:55:56.683039 1964 log.go:172] (0xc0009d5550) (0xc000559400) Stream removed, broadcasting: 5\n" Mar 12 21:55:56.686: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3510.svc.cluster.local\tcanonical name = externalsvc.services-3510.svc.cluster.local.\nName:\texternalsvc.services-3510.svc.cluster.local\nAddress: 10.98.238.17\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3510, will wait for the garbage collector to delete the pods Mar 12 21:55:56.743: INFO: Deleting ReplicationController externalsvc took: 4.428415ms Mar 12 21:55:57.044: INFO: Terminating ReplicationController externalsvc pods took: 300.221325ms Mar 12 21:56:00.979: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:56:01.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3510" for this suite. 
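The type flip above amounts to a single update of the Service spec; since an ExternalName service is served purely as a DNS CNAME, the cluster-IP side of the spec has to be cleared in the same update. A sketch using the names from the log (illustrative, not the e2e helper's code; assumes a v0.18+ client-go):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	svcs := cs.CoreV1().Services("services-3510")
	svc, err := svcs.Get(ctx, "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-3510.svc.cluster.local"
	// Validation requires clusterIP to be empty for ExternalName; ports are
	// meaningless for a CNAME, so they are dropped as well.
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

With the CNAME in place, the nslookup through the exec pod resolves clusterip-service to the externalsvc FQDN, which is exactly the stdout captured above.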
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.827 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":183,"skipped":2920,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:56:01.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d9a60c7a-4b30-44f9-8b01-c5663f65e17e STEP: Creating a pod to test consume configMaps Mar 12 21:56:01.171: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-231752a2-15f3-4ef3-967d-49da7de84487" in namespace "projected-8384" to be "success or failure" Mar 12 21:56:01.216: INFO: Pod "pod-projected-configmaps-231752a2-15f3-4ef3-967d-49da7de84487": Phase="Pending", Reason="", readiness=false. Elapsed: 44.389888ms Mar 12 21:56:03.219: INFO: Pod "pod-projected-configmaps-231752a2-15f3-4ef3-967d-49da7de84487": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.047842742s STEP: Saw pod success Mar 12 21:56:03.219: INFO: Pod "pod-projected-configmaps-231752a2-15f3-4ef3-967d-49da7de84487" satisfied condition "success or failure" Mar 12 21:56:03.222: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-231752a2-15f3-4ef3-967d-49da7de84487 container projected-configmap-volume-test: STEP: delete the pod Mar 12 21:56:03.254: INFO: Waiting for pod pod-projected-configmaps-231752a2-15f3-4ef3-967d-49da7de84487 to disappear Mar 12 21:56:03.265: INFO: Pod pod-projected-configmaps-231752a2-15f3-4ef3-967d-49da7de84487 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:56:03.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8384" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2926,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:56:03.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 21:56:03.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-932a9325-445c-4b5b-93f8-847513ae2ff2" in namespace "downward-api-1825" to be "success or failure" Mar 12 21:56:03.363: INFO: Pod "downwardapi-volume-932a9325-445c-4b5b-93f8-847513ae2ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.717885ms Mar 12 21:56:05.366: INFO: Pod "downwardapi-volume-932a9325-445c-4b5b-93f8-847513ae2ff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021516581s STEP: Saw pod success Mar 12 21:56:05.366: INFO: Pod "downwardapi-volume-932a9325-445c-4b5b-93f8-847513ae2ff2" satisfied condition "success or failure" Mar 12 21:56:05.369: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-932a9325-445c-4b5b-93f8-847513ae2ff2 container client-container: STEP: delete the pod Mar 12 21:56:05.386: INFO: Waiting for pod downwardapi-volume-932a9325-445c-4b5b-93f8-847513ae2ff2 to disappear Mar 12 21:56:05.390: INFO: Pod downwardapi-volume-932a9325-445c-4b5b-93f8-847513ae2ff2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:56:05.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1825" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2934,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:56:05.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 21:56:05.872: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 21:56:07.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719646965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 21:56:10.907: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 21:56:10.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9940-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 21:56:12.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4301" for this suite. STEP: Destroying namespace "webhook-4301-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.680 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":186,"skipped":2953,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 21:56:12.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2302 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-2302 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2302 Mar 12 21:56:12.174: INFO: Found 0 stateful pods, waiting for 1 Mar 12 21:56:22.178: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 12 21:56:22.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:56:22.385: INFO: stderr: "I0312 21:56:22.312008 1985 log.go:172] (0xc000c333f0) (0xc0009c0640) Create stream\nI0312 21:56:22.312039 1985 log.go:172] (0xc000c333f0) (0xc0009c0640) Stream added, broadcasting: 1\nI0312 21:56:22.314963 1985 log.go:172] (0xc000c333f0) Reply frame received for 1\nI0312 21:56:22.315003 1985 log.go:172] (0xc000c333f0) (0xc00063e820) Create stream\nI0312 21:56:22.315020 1985 log.go:172] (0xc000c333f0) (0xc00063e820) Stream added, broadcasting: 3\nI0312 21:56:22.315580 1985 log.go:172] (0xc000c333f0) Reply frame received for 3\nI0312 21:56:22.315601 1985 log.go:172] (0xc000c333f0) (0xc0004295e0) Create stream\nI0312 21:56:22.315607 1985 log.go:172] (0xc000c333f0) (0xc0004295e0) Stream added, broadcasting: 5\nI0312 21:56:22.316271 1985 log.go:172] (0xc000c333f0) Reply frame received for 5\nI0312 21:56:22.364653 1985 log.go:172] (0xc000c333f0) Data frame received for 5\nI0312 21:56:22.364670 1985 log.go:172] (0xc0004295e0) (5) Data frame 
handling\nI0312 21:56:22.364680 1985 log.go:172] (0xc0004295e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:56:22.380911 1985 log.go:172] (0xc000c333f0) Data frame received for 3\nI0312 21:56:22.380928 1985 log.go:172] (0xc00063e820) (3) Data frame handling\nI0312 21:56:22.380946 1985 log.go:172] (0xc00063e820) (3) Data frame sent\nI0312 21:56:22.380963 1985 log.go:172] (0xc000c333f0) Data frame received for 5\nI0312 21:56:22.380968 1985 log.go:172] (0xc0004295e0) (5) Data frame handling\nI0312 21:56:22.381033 1985 log.go:172] (0xc000c333f0) Data frame received for 3\nI0312 21:56:22.381045 1985 log.go:172] (0xc00063e820) (3) Data frame handling\nI0312 21:56:22.382479 1985 log.go:172] (0xc000c333f0) Data frame received for 1\nI0312 21:56:22.382491 1985 log.go:172] (0xc0009c0640) (1) Data frame handling\nI0312 21:56:22.382497 1985 log.go:172] (0xc0009c0640) (1) Data frame sent\nI0312 21:56:22.382506 1985 log.go:172] (0xc000c333f0) (0xc0009c0640) Stream removed, broadcasting: 1\nI0312 21:56:22.382546 1985 log.go:172] (0xc000c333f0) Go away received\nI0312 21:56:22.382805 1985 log.go:172] (0xc000c333f0) (0xc0009c0640) Stream removed, broadcasting: 1\nI0312 21:56:22.382822 1985 log.go:172] (0xc000c333f0) (0xc00063e820) Stream removed, broadcasting: 3\nI0312 21:56:22.382829 1985 log.go:172] (0xc000c333f0) (0xc0004295e0) Stream removed, broadcasting: 5\n" Mar 12 21:56:22.385: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:56:22.385: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:56:22.388: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 12 21:56:32.392: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:56:32.392: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 21:56:32.405: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:56:32.405: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:32.405: INFO: Mar 12 21:56:32.405: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 12 21:56:33.409: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993845455s Mar 12 21:56:34.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990092207s Mar 12 21:56:35.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987073239s Mar 12 21:56:36.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982593544s Mar 12 21:56:37.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977940203s Mar 12 21:56:38.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.973893211s Mar 12 21:56:39.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970075833s Mar 12 21:56:40.437: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966164654s Mar 12 21:56:41.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.24024ms STEP: Scaling up stateful set ss to 3 
replicas and waiting until all of them will be running in namespace statefulset-2302 Mar 12 21:56:42.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:56:42.659: INFO: stderr: "I0312 21:56:42.578934 2006 log.go:172] (0xc000bc8e70) (0xc0009a6500) Create stream\nI0312 21:56:42.578993 2006 log.go:172] (0xc000bc8e70) (0xc0009a6500) Stream added, broadcasting: 1\nI0312 21:56:42.583659 2006 log.go:172] (0xc000bc8e70) Reply frame received for 1\nI0312 21:56:42.583702 2006 log.go:172] (0xc000bc8e70) (0xc000642780) Create stream\nI0312 21:56:42.583715 2006 log.go:172] (0xc000bc8e70) (0xc000642780) Stream added, broadcasting: 3\nI0312 21:56:42.584710 2006 log.go:172] (0xc000bc8e70) Reply frame received for 3\nI0312 21:56:42.584735 2006 log.go:172] (0xc000bc8e70) (0xc000561540) Create stream\nI0312 21:56:42.584745 2006 log.go:172] (0xc000bc8e70) (0xc000561540) Stream added, broadcasting: 5\nI0312 21:56:42.585595 2006 log.go:172] (0xc000bc8e70) Reply frame received for 5\nI0312 21:56:42.652355 2006 log.go:172] (0xc000bc8e70) Data frame received for 5\nI0312 21:56:42.652385 2006 log.go:172] (0xc000561540) (5) Data frame handling\nI0312 21:56:42.652395 2006 log.go:172] (0xc000561540) (5) Data frame sent\nI0312 21:56:42.652404 2006 log.go:172] (0xc000bc8e70) Data frame received for 5\nI0312 21:56:42.652410 2006 log.go:172] (0xc000561540) (5) Data frame handling\nI0312 21:56:42.652421 2006 log.go:172] (0xc000bc8e70) Data frame received for 3\nI0312 21:56:42.652428 2006 log.go:172] (0xc000642780) (3) Data frame handling\nI0312 21:56:42.652436 2006 log.go:172] (0xc000642780) (3) Data frame sent\nI0312 21:56:42.652449 2006 log.go:172] (0xc000bc8e70) Data frame received for 3\nI0312 21:56:42.652456 2006 log.go:172] (0xc000642780) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 21:56:42.654297 2006 log.go:172] (0xc000bc8e70) Data frame received for 1\nI0312 21:56:42.654375 2006 log.go:172] (0xc0009a6500) (1) Data frame handling\nI0312 21:56:42.654410 2006 log.go:172] (0xc0009a6500) (1) Data frame sent\nI0312 21:56:42.654474 2006 log.go:172] (0xc000bc8e70) (0xc0009a6500) Stream removed, broadcasting: 1\nI0312 21:56:42.654524 2006 log.go:172] (0xc000bc8e70) Go away received\nI0312 21:56:42.654848 2006 log.go:172] (0xc000bc8e70) (0xc0009a6500) Stream removed, broadcasting: 1\nI0312 21:56:42.654887 2006 log.go:172] (0xc000bc8e70) (0xc000642780) Stream removed, broadcasting: 3\nI0312 21:56:42.654912 2006 log.go:172] (0xc000bc8e70) (0xc000561540) Stream removed, broadcasting: 5\n" Mar 12 21:56:42.659: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 21:56:42.659: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 21:56:42.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:56:42.826: INFO: stderr: "I0312 21:56:42.760071 2026 log.go:172] (0xc0000ec2c0) (0xc00063a5a0) Create stream\nI0312 21:56:42.760109 2026 log.go:172] (0xc0000ec2c0) (0xc00063a5a0) Stream added, broadcasting: 1\nI0312 21:56:42.761611 2026 log.go:172] (0xc0000ec2c0) Reply frame received for 1\nI0312 21:56:42.761631 2026 log.go:172] (0xc0000ec2c0) (0xc0004bd360) Create stream\nI0312 
21:56:42.761636 2026 log.go:172] (0xc0000ec2c0) (0xc0004bd360) Stream added, broadcasting: 3\nI0312 21:56:42.762221 2026 log.go:172] (0xc0000ec2c0) Reply frame received for 3\nI0312 21:56:42.762247 2026 log.go:172] (0xc0000ec2c0) (0xc000950000) Create stream\nI0312 21:56:42.762255 2026 log.go:172] (0xc0000ec2c0) (0xc000950000) Stream added, broadcasting: 5\nI0312 21:56:42.762844 2026 log.go:172] (0xc0000ec2c0) Reply frame received for 5\nI0312 21:56:42.822841 2026 log.go:172] (0xc0000ec2c0) Data frame received for 3\nI0312 21:56:42.822869 2026 log.go:172] (0xc0004bd360) (3) Data frame handling\nI0312 21:56:42.822876 2026 log.go:172] (0xc0004bd360) (3) Data frame sent\nI0312 21:56:42.822882 2026 log.go:172] (0xc0000ec2c0) Data frame received for 3\nI0312 21:56:42.822886 2026 log.go:172] (0xc0004bd360) (3) Data frame handling\nI0312 21:56:42.822900 2026 log.go:172] (0xc0000ec2c0) Data frame received for 5\nI0312 21:56:42.822905 2026 log.go:172] (0xc000950000) (5) Data frame handling\nI0312 21:56:42.822910 2026 log.go:172] (0xc000950000) (5) Data frame sent\nI0312 21:56:42.822914 2026 log.go:172] (0xc0000ec2c0) Data frame received for 5\nI0312 21:56:42.822919 2026 log.go:172] (0xc000950000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 21:56:42.823753 2026 log.go:172] (0xc0000ec2c0) Data frame received for 1\nI0312 21:56:42.823777 2026 log.go:172] (0xc00063a5a0) (1) Data frame handling\nI0312 21:56:42.823789 2026 log.go:172] (0xc00063a5a0) (1) Data frame sent\nI0312 21:56:42.823798 2026 log.go:172] (0xc0000ec2c0) (0xc00063a5a0) Stream removed, broadcasting: 1\nI0312 21:56:42.823808 2026 log.go:172] (0xc0000ec2c0) Go away received\nI0312 21:56:42.824146 2026 log.go:172] (0xc0000ec2c0) (0xc00063a5a0) Stream removed, broadcasting: 1\nI0312 21:56:42.824160 2026 log.go:172] (0xc0000ec2c0) (0xc0004bd360) Stream removed, broadcasting: 3\nI0312 21:56:42.824169 2026 log.go:172] (0xc0000ec2c0) (0xc000950000) Stream removed, broadcasting: 5\n" Mar 12 21:56:42.826: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 21:56:42.826: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 21:56:42.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:56:43.005: INFO: stderr: "I0312 21:56:42.910425 2044 log.go:172] (0xc000102c60) (0xc0006c3d60) Create stream\nI0312 21:56:42.910475 2044 log.go:172] (0xc000102c60) (0xc0006c3d60) Stream added, broadcasting: 1\nI0312 21:56:42.912214 2044 log.go:172] (0xc000102c60) Reply frame received for 1\nI0312 21:56:42.912238 2044 log.go:172] (0xc000102c60) (0xc0006c3e00) Create stream\nI0312 21:56:42.912245 2044 log.go:172] (0xc000102c60) (0xc0006c3e00) Stream added, broadcasting: 3\nI0312 21:56:42.912738 2044 log.go:172] (0xc000102c60) Reply frame received for 3\nI0312 21:56:42.912757 2044 log.go:172] (0xc000102c60) (0xc0007a6000) Create stream\nI0312 21:56:42.912765 2044 log.go:172] (0xc000102c60) (0xc0007a6000) Stream added, broadcasting: 5\nI0312 21:56:42.913283 2044 log.go:172] (0xc000102c60) Reply frame received for 5\nI0312 21:56:42.995253 2044 log.go:172] (0xc000102c60) Data frame received for 5\nI0312 21:56:42.995283 2044 log.go:172] (0xc0007a6000) (5) Data frame handling\nI0312 21:56:42.995293 
2044 log.go:172] (0xc0007a6000) (5) Data frame sent\nI0312 21:56:42.995298 2044 log.go:172] (0xc000102c60) Data frame received for 5\nI0312 21:56:42.995303 2044 log.go:172] (0xc0007a6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 21:56:42.995317 2044 log.go:172] (0xc000102c60) Data frame received for 3\nI0312 21:56:42.995325 2044 log.go:172] (0xc0006c3e00) (3) Data frame handling\nI0312 21:56:42.995330 2044 log.go:172] (0xc0006c3e00) (3) Data frame sent\nI0312 21:56:42.995338 2044 log.go:172] (0xc000102c60) Data frame received for 3\nI0312 21:56:42.995349 2044 log.go:172] (0xc0006c3e00) (3) Data frame handling\nI0312 21:56:43.003510 2044 log.go:172] (0xc000102c60) Data frame received for 1\nI0312 21:56:43.003531 2044 log.go:172] (0xc0006c3d60) (1) Data frame handling\nI0312 21:56:43.003545 2044 log.go:172] (0xc0006c3d60) (1) Data frame sent\nI0312 21:56:43.003689 2044 log.go:172] (0xc000102c60) (0xc0006c3d60) Stream removed, broadcasting: 1\nI0312 21:56:43.003726 2044 log.go:172] (0xc000102c60) Go away received\nI0312 21:56:43.003888 2044 log.go:172] (0xc000102c60) (0xc0006c3d60) Stream removed, broadcasting: 1\nI0312 21:56:43.003899 2044 log.go:172] (0xc000102c60) (0xc0006c3e00) Stream removed, broadcasting: 3\nI0312 21:56:43.003905 2044 log.go:172] (0xc000102c60) (0xc0007a6000) Stream removed, broadcasting: 5\n" Mar 12 21:56:43.006: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 21:56:43.006: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 21:56:43.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 21:56:43.008: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 21:56:43.008: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 12 21:56:43.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:56:43.155: INFO: stderr: "I0312 21:56:43.094937 2064 log.go:172] (0xc00091a160) (0xc00066bea0) Create stream\nI0312 21:56:43.094964 2064 log.go:172] (0xc00091a160) (0xc00066bea0) Stream added, broadcasting: 1\nI0312 21:56:43.096389 2064 log.go:172] (0xc00091a160) Reply frame received for 1\nI0312 21:56:43.096409 2064 log.go:172] (0xc00091a160) (0xc00074b5e0) Create stream\nI0312 21:56:43.096417 2064 log.go:172] (0xc00091a160) (0xc00074b5e0) Stream added, broadcasting: 3\nI0312 21:56:43.096846 2064 log.go:172] (0xc00091a160) Reply frame received for 3\nI0312 21:56:43.096866 2064 log.go:172] (0xc00091a160) (0xc0009be000) Create stream\nI0312 21:56:43.096877 2064 log.go:172] (0xc00091a160) (0xc0009be000) Stream added, broadcasting: 5\nI0312 21:56:43.097408 2064 log.go:172] (0xc00091a160) Reply frame received for 5\nI0312 21:56:43.150909 2064 log.go:172] (0xc00091a160) Data frame received for 3\nI0312 21:56:43.150954 2064 log.go:172] (0xc00074b5e0) (3) Data frame handling\nI0312 21:56:43.150967 2064 log.go:172] (0xc00074b5e0) (3) Data frame sent\nI0312 21:56:43.150972 2064 log.go:172] (0xc00091a160) Data frame received for 3\nI0312 21:56:43.150976 2064 log.go:172] (0xc00074b5e0) (3) Data frame handling\nI0312 21:56:43.150995 
2064 log.go:172] (0xc00091a160) Data frame received for 5\nI0312 21:56:43.151000 2064 log.go:172] (0xc0009be000) (5) Data frame handling\nI0312 21:56:43.151005 2064 log.go:172] (0xc0009be000) (5) Data frame sent\nI0312 21:56:43.151010 2064 log.go:172] (0xc00091a160) Data frame received for 5\nI0312 21:56:43.151014 2064 log.go:172] (0xc0009be000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:56:43.152408 2064 log.go:172] (0xc00091a160) Data frame received for 1\nI0312 21:56:43.152421 2064 log.go:172] (0xc00066bea0) (1) Data frame handling\nI0312 21:56:43.152433 2064 log.go:172] (0xc00066bea0) (1) Data frame sent\nI0312 21:56:43.152442 2064 log.go:172] (0xc00091a160) (0xc00066bea0) Stream removed, broadcasting: 1\nI0312 21:56:43.152453 2064 log.go:172] (0xc00091a160) Go away received\nI0312 21:56:43.152710 2064 log.go:172] (0xc00091a160) (0xc00066bea0) Stream removed, broadcasting: 1\nI0312 21:56:43.152724 2064 log.go:172] (0xc00091a160) (0xc00074b5e0) Stream removed, broadcasting: 3\nI0312 21:56:43.152730 2064 log.go:172] (0xc00091a160) (0xc0009be000) Stream removed, broadcasting: 5\n" Mar 12 21:56:43.155: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:56:43.155: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:56:43.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:56:43.331: INFO: stderr: "I0312 21:56:43.241896 2086 log.go:172] (0xc000a262c0) (0xc0009ea140) Create stream\nI0312 21:56:43.241934 2086 log.go:172] (0xc000a262c0) (0xc0009ea140) Stream added, broadcasting: 1\nI0312 21:56:43.244533 2086 log.go:172] (0xc000a262c0) Reply frame received for 1\nI0312 21:56:43.244557 2086 log.go:172] (0xc000a262c0) (0xc000568780) Create stream\nI0312 21:56:43.244563 2086 log.go:172] (0xc000a262c0) (0xc000568780) Stream added, broadcasting: 3\nI0312 21:56:43.245026 2086 log.go:172] (0xc000a262c0) Reply frame received for 3\nI0312 21:56:43.245047 2086 log.go:172] (0xc000a262c0) (0xc000233540) Create stream\nI0312 21:56:43.245055 2086 log.go:172] (0xc000a262c0) (0xc000233540) Stream added, broadcasting: 5\nI0312 21:56:43.245496 2086 log.go:172] (0xc000a262c0) Reply frame received for 5\nI0312 21:56:43.302913 2086 log.go:172] (0xc000a262c0) Data frame received for 5\nI0312 21:56:43.302933 2086 log.go:172] (0xc000233540) (5) Data frame handling\nI0312 21:56:43.302944 2086 log.go:172] (0xc000233540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:56:43.327647 2086 log.go:172] (0xc000a262c0) Data frame received for 3\nI0312 21:56:43.327661 2086 log.go:172] (0xc000568780) (3) Data frame handling\nI0312 21:56:43.327669 2086 log.go:172] (0xc000568780) (3) Data frame sent\nI0312 21:56:43.327740 2086 log.go:172] (0xc000a262c0) Data frame received for 5\nI0312 21:56:43.327757 2086 log.go:172] (0xc000233540) (5) Data frame handling\nI0312 21:56:43.327769 2086 log.go:172] (0xc000a262c0) Data frame received for 3\nI0312 21:56:43.327773 2086 log.go:172] (0xc000568780) (3) Data frame handling\nI0312 21:56:43.328897 2086 log.go:172] (0xc000a262c0) Data frame received for 1\nI0312 21:56:43.328905 2086 log.go:172] (0xc0009ea140) (1) Data frame handling\nI0312 21:56:43.328911 2086 log.go:172] (0xc0009ea140) (1) Data frame sent\nI0312 21:56:43.328919 2086 
log.go:172] (0xc000a262c0) (0xc0009ea140) Stream removed, broadcasting: 1\nI0312 21:56:43.328948 2086 log.go:172] (0xc000a262c0) Go away received\nI0312 21:56:43.329137 2086 log.go:172] (0xc000a262c0) (0xc0009ea140) Stream removed, broadcasting: 1\nI0312 21:56:43.329147 2086 log.go:172] (0xc000a262c0) (0xc000568780) Stream removed, broadcasting: 3\nI0312 21:56:43.329152 2086 log.go:172] (0xc000a262c0) (0xc000233540) Stream removed, broadcasting: 5\n" Mar 12 21:56:43.331: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:56:43.331: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:56:43.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 21:56:43.496: INFO: stderr: "I0312 21:56:43.419214 2106 log.go:172] (0xc000586a50) (0xc000964000) Create stream\nI0312 21:56:43.419262 2106 log.go:172] (0xc000586a50) (0xc000964000) Stream added, broadcasting: 1\nI0312 21:56:43.421078 2106 log.go:172] (0xc000586a50) Reply frame received for 1\nI0312 21:56:43.421099 2106 log.go:172] (0xc000586a50) (0xc0009640a0) Create stream\nI0312 21:56:43.421104 2106 log.go:172] (0xc000586a50) (0xc0009640a0) Stream added, broadcasting: 3\nI0312 21:56:43.421754 2106 log.go:172] (0xc000586a50) Reply frame received for 3\nI0312 21:56:43.421773 2106 log.go:172] (0xc000586a50) (0xc000964140) Create stream\nI0312 21:56:43.421778 2106 log.go:172] (0xc000586a50) (0xc000964140) Stream added, broadcasting: 5\nI0312 21:56:43.422459 2106 log.go:172] (0xc000586a50) Reply frame received for 5\nI0312 21:56:43.471161 2106 log.go:172] (0xc000586a50) Data frame received for 5\nI0312 21:56:43.471178 2106 log.go:172] (0xc000964140) (5) Data frame handling\nI0312 21:56:43.471188 2106 log.go:172] (0xc000964140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 21:56:43.492957 2106 log.go:172] (0xc000586a50) Data frame received for 5\nI0312 21:56:43.492984 2106 log.go:172] (0xc000964140) (5) Data frame handling\nI0312 21:56:43.492999 2106 log.go:172] (0xc000586a50) Data frame received for 3\nI0312 21:56:43.493003 2106 log.go:172] (0xc0009640a0) (3) Data frame handling\nI0312 21:56:43.493009 2106 log.go:172] (0xc0009640a0) (3) Data frame sent\nI0312 21:56:43.493014 2106 log.go:172] (0xc000586a50) Data frame received for 3\nI0312 21:56:43.493017 2106 log.go:172] (0xc0009640a0) (3) Data frame handling\nI0312 21:56:43.494355 2106 log.go:172] (0xc000586a50) Data frame received for 1\nI0312 21:56:43.494374 2106 log.go:172] (0xc000964000) (1) Data frame handling\nI0312 21:56:43.494385 2106 log.go:172] (0xc000964000) (1) Data frame sent\nI0312 21:56:43.494393 2106 log.go:172] (0xc000586a50) (0xc000964000) Stream removed, broadcasting: 1\nI0312 21:56:43.494401 2106 log.go:172] (0xc000586a50) Go away received\nI0312 21:56:43.494702 2106 log.go:172] (0xc000586a50) (0xc000964000) Stream removed, broadcasting: 1\nI0312 21:56:43.494715 2106 log.go:172] (0xc000586a50) (0xc0009640a0) Stream removed, broadcasting: 3\nI0312 21:56:43.494721 2106 log.go:172] (0xc000586a50) (0xc000964140) Stream removed, broadcasting: 5\n" Mar 12 21:56:43.497: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 21:56:43.497: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 21:56:43.497: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 21:56:43.499: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 12 21:56:53.505: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:56:53.506: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:56:53.506: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 12 21:56:53.520: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:56:53.520: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:53.520: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:53.520: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:53.520: INFO: Mar 12 21:56:53.520: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:56:54.524: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:56:54.524: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:54.524: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:54.524: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:54.524: INFO: Mar 12 21:56:54.524: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:56:55.528: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:56:55.528: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:55.528: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:55.528: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:55.528: INFO: Mar 12 21:56:55.528: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:56:56.532: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:56:56.532: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:56.532: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:56.532: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:56.532: INFO: Mar 12 21:56:56.532: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:56:57.537: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:56:57.537: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:57.537: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:57.537: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:57.537: INFO: Mar 12 21:56:57.537: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:56:58.542: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:56:58.542: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:58.542: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:58.542: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:58.542: INFO: Mar 12 21:56:58.542: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:56:59.546: INFO: POD NODE PHASE GRACE CONDITIONS 
Mar 12 21:56:59.546: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:56:59.546: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:59.546: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:56:59.546: INFO: Mar 12 21:56:59.546: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:57:00.551: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:57:00.551: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:57:00.551: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:57:00.551: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:57:00.551: INFO: Mar 12 21:57:00.551: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:57:01.556: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:57:01.556: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:57:01.556: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:57:01.556: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:57:01.556: INFO: Mar 12 21:57:01.556: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 21:57:02.558: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 21:57:02.558: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:12 +0000 UTC }] Mar 12 21:57:02.559: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:57:02.559: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 21:56:32 +0000 UTC }] Mar 12 21:57:02.559: INFO: Mar 12 21:57:02.559: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2302 Mar 12 21:57:03.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:57:03.701: INFO: rc: 1 Mar 12 21:57:03.701: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 12 21:57:13.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:57:15.445: INFO: rc: 1 Mar 12 21:57:15.445: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:57:25.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:57:25.567: INFO: rc: 1 Mar 12 21:57:25.567: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:57:35.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:57:35.692: INFO: rc: 1 Mar 12 21:57:35.692: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:57:45.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:57:45.800: INFO: rc: 1 Mar 12 21:57:45.800: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:57:55.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:57:55.908: INFO: rc: 1 Mar 12 21:57:55.908: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:58:05.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:58:06.019: INFO: rc: 1 Mar 12 21:58:06.019: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:58:16.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:58:16.112: INFO: rc: 1 Mar 12 21:58:16.112: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:58:26.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:58:26.219: INFO: rc: 1 Mar 12 21:58:26.219: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:58:36.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:58:36.337: INFO: rc: 1 Mar 12 21:58:36.337: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:58:46.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:58:46.424: INFO: rc: 1 Mar 12 21:58:46.424: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:58:56.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:58:56.511: INFO: rc: 1 Mar 12 21:58:56.511: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:59:06.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:59:06.593: INFO: rc: 1 Mar 12 21:59:06.593: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:59:16.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:59:16.666: INFO: rc: 1 Mar 12 21:59:16.666: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:59:26.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:59:26.753: INFO: rc: 1 Mar 12 21:59:26.753: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:59:36.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:59:36.863: INFO: rc: 1 Mar 12 21:59:36.863: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:59:46.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:59:46.988: INFO: rc: 1 Mar 12 21:59:46.988: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 21:59:56.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 21:59:57.096: INFO: rc: 1 Mar 12 21:59:57.096: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:00:07.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:00:07.220: INFO: rc: 1 Mar 12 22:00:07.220: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:00:17.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:00:17.308: INFO: rc: 1 Mar 12 22:00:17.308: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:00:27.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:00:27.430: INFO: rc: 1 Mar 12 22:00:27.431: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:00:37.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:00:37.537: INFO: rc: 1 Mar 12 22:00:37.537: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:00:47.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:00:47.664: INFO: rc: 1 Mar 12 22:00:47.664: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:00:57.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:00:57.787: INFO: rc: 1 Mar 12 22:00:57.787: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:01:07.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:01:07.910: INFO: rc: 1 Mar 12 22:01:07.910: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:01:17.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:01:17.981: INFO: rc: 1 Mar 12 22:01:17.981: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:01:27.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:01:28.108: INFO: rc: 1 Mar 12 22:01:28.108: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:01:38.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:01:38.195: INFO: rc: 1 Mar 12 22:01:38.195: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:01:48.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:01:48.303: INFO: rc: 1 Mar 12 22:01:48.303: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:01:58.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:01:58.397: INFO: rc: 1 Mar 12 22:01:58.397: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 12 22:02:08.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2302 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:02:08.502: INFO: rc: 1 Mar 12 22:02:08.502: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Mar 12 22:02:08.502: INFO: Scaling statefulset ss to 0 Mar 12 22:02:08.509: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] 
Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 22:02:08.511: INFO: Deleting all statefulset in ns statefulset-2302 Mar 12 22:02:08.513: INFO: Scaling statefulset ss to 0 Mar 12 22:02:08.519: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 22:02:08.520: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:02:08.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2302" for this suite. • [SLOW TEST:356.460 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":187,"skipped":2965,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:02:08.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:02:24.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8143" for this suite. • [SLOW TEST:16.108 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":188,"skipped":2979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:02:24.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 22:02:24.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5014' Mar 12 22:02:24.813: INFO: stderr: "" Mar 12 22:02:24.813: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 12 22:02:29.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5014 -o json' Mar 12 22:02:29.971: INFO: stderr: "" Mar 12 22:02:29.971: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-12T22:02:24Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5014\",\n \"resourceVersion\": \"1252531\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5014/pods/e2e-test-httpd-pod\",\n \"uid\": \"27a83953-506b-4b22-a371-3edcbca32df8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-2qj4l\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-2qj4l\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-2qj4l\"\n }\n }\n ]\n },\n \"status\": 
{\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T22:02:24Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T22:02:26Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T22:02:26Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T22:02:24Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://562e7f84cce39829bd683ca83e5b48e8cde72fd1919241fe80a08292f3f12607\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-12T22:02:25Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.233\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.233\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-12T22:02:24Z\"\n }\n}\n" STEP: replace the image in the pod Mar 12 22:02:29.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5014' Mar 12 22:02:30.255: INFO: stderr: "" Mar 12 22:02:30.255: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882 Mar 12 22:02:30.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5014' Mar 12 22:02:32.367: INFO: stderr: "" Mar 12 22:02:32.367: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:02:32.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5014" for this suite. 
• [SLOW TEST:7.727 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":189,"skipped":3016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:02:32.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6952e540-8c1e-4091-ac2a-5fcff8d6e1af STEP: Creating a pod to test consume secrets Mar 12 22:02:32.451: INFO: Waiting up to 5m0s for pod "pod-secrets-8f4d91f8-e84e-400e-a804-acb9821046ee" in namespace "secrets-5977" to be "success or failure" Mar 12 22:02:32.460: INFO: Pod "pod-secrets-8f4d91f8-e84e-400e-a804-acb9821046ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.579871ms Mar 12 22:02:34.462: INFO: Pod "pod-secrets-8f4d91f8-e84e-400e-a804-acb9821046ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011446739s STEP: Saw pod success Mar 12 22:02:34.462: INFO: Pod "pod-secrets-8f4d91f8-e84e-400e-a804-acb9821046ee" satisfied condition "success or failure" Mar 12 22:02:34.464: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-8f4d91f8-e84e-400e-a804-acb9821046ee container secret-volume-test: STEP: delete the pod Mar 12 22:02:34.523: INFO: Waiting for pod pod-secrets-8f4d91f8-e84e-400e-a804-acb9821046ee to disappear Mar 12 22:02:34.528: INFO: Pod pod-secrets-8f4d91f8-e84e-400e-a804-acb9821046ee no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:02:34.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5977" for this suite. 
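Aside: the Secrets volume test above checks two securityContext interactions at once: defaultMode sets the permission bits on the projected files, and fsGroup sets their group ownership so a non-root user can read them. A minimal sketch under the same assumptions (names and ids are illustrative):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root
    fsGroup: 1001        # group applied to volume contents
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["ls", "-ln", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0440
EOF
kubectl logs pod-secrets-demo   # once completed: file mode 440, gid 1001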
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3081,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:02:34.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1356 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 22:02:34.564: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 12 22:02:56.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.2.234&port=8081&tries=1'] Namespace:pod-network-test-1356 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:02:56.691: INFO: >>> kubeConfig: /root/.kube/config I0312 22:02:56.727424 6 log.go:172] (0xc00223c580) (0xc000aec280) Create stream I0312 22:02:56.727462 6 log.go:172] (0xc00223c580) (0xc000aec280) Stream added, broadcasting: 1 I0312 22:02:56.730494 6 log.go:172] (0xc00223c580) Reply frame received for 1 I0312 22:02:56.730549 6 log.go:172] (0xc00223c580) (0xc000ada3c0) Create stream I0312 22:02:56.730571 6 log.go:172] (0xc00223c580) (0xc000ada3c0) Stream added, broadcasting: 3 I0312 22:02:56.731867 6 log.go:172] (0xc00223c580) Reply frame received for 3 I0312 22:02:56.731906 6 log.go:172] (0xc00223c580) (0xc0016f8000) Create stream I0312 22:02:56.731922 6 log.go:172] (0xc00223c580) (0xc0016f8000) Stream added, broadcasting: 5 I0312 22:02:56.732809 6 log.go:172] (0xc00223c580) Reply frame received for 5 I0312 22:02:56.797790 6 log.go:172] (0xc00223c580) Data frame received for 3 I0312 22:02:56.797832 6 log.go:172] (0xc000ada3c0) (3) Data frame handling I0312 22:02:56.797868 6 log.go:172] (0xc000ada3c0) (3) Data frame sent I0312 22:02:56.798466 6 log.go:172] (0xc00223c580) Data frame received for 3 I0312 22:02:56.798498 6 log.go:172] (0xc000ada3c0) (3) Data frame handling I0312 22:02:56.798569 6 log.go:172] (0xc00223c580) Data frame received for 5 I0312 22:02:56.798588 6 log.go:172] (0xc0016f8000) (5) Data frame handling I0312 22:02:56.800364 6 log.go:172] (0xc00223c580) Data frame received for 1 I0312 22:02:56.800390 6 log.go:172] (0xc000aec280) (1) Data frame handling I0312 22:02:56.800407 6 log.go:172] (0xc000aec280) (1) Data frame sent I0312 22:02:56.800429 6 log.go:172] (0xc00223c580) (0xc000aec280) Stream removed, broadcasting: 1 I0312 22:02:56.800449 6 log.go:172] (0xc00223c580) Go away received I0312 22:02:56.800581 6 log.go:172] (0xc00223c580) (0xc000aec280) Stream removed, broadcasting: 1 I0312 22:02:56.800609 6 log.go:172] 
(0xc00223c580) (0xc000ada3c0) Stream removed, broadcasting: 3 I0312 22:02:56.800622 6 log.go:172] (0xc00223c580) (0xc0016f8000) Stream removed, broadcasting: 5 Mar 12 22:02:56.800: INFO: Waiting for responses: map[] Mar 12 22:02:56.814: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.1.235&port=8081&tries=1'] Namespace:pod-network-test-1356 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:02:56.814: INFO: >>> kubeConfig: /root/.kube/config I0312 22:02:56.855604 6 log.go:172] (0xc001eea420) (0xc000adaa00) Create stream I0312 22:02:56.855629 6 log.go:172] (0xc001eea420) (0xc000adaa00) Stream added, broadcasting: 1 I0312 22:02:56.858085 6 log.go:172] (0xc001eea420) Reply frame received for 1 I0312 22:02:56.858148 6 log.go:172] (0xc001eea420) (0xc00096db80) Create stream I0312 22:02:56.858162 6 log.go:172] (0xc001eea420) (0xc00096db80) Stream added, broadcasting: 3 I0312 22:02:56.859143 6 log.go:172] (0xc001eea420) Reply frame received for 3 I0312 22:02:56.859186 6 log.go:172] (0xc001eea420) (0xc000adaaa0) Create stream I0312 22:02:56.859197 6 log.go:172] (0xc001eea420) (0xc000adaaa0) Stream added, broadcasting: 5 I0312 22:02:56.860093 6 log.go:172] (0xc001eea420) Reply frame received for 5 I0312 22:02:56.925608 6 log.go:172] (0xc001eea420) Data frame received for 3 I0312 22:02:56.925627 6 log.go:172] (0xc00096db80) (3) Data frame handling I0312 22:02:56.925642 6 log.go:172] (0xc00096db80) (3) Data frame sent I0312 22:02:56.926101 6 log.go:172] (0xc001eea420) Data frame received for 3 I0312 22:02:56.926147 6 log.go:172] (0xc001eea420) Data frame received for 5 I0312 22:02:56.926172 6 log.go:172] (0xc000adaaa0) (5) Data frame handling I0312 22:02:56.926197 6 log.go:172] (0xc00096db80) (3) Data frame handling I0312 22:02:56.927265 6 log.go:172] (0xc001eea420) Data frame received for 1 I0312 22:02:56.927283 6 log.go:172] (0xc000adaa00) (1) Data frame handling I0312 22:02:56.927301 6 log.go:172] (0xc000adaa00) (1) Data frame sent I0312 22:02:56.927314 6 log.go:172] (0xc001eea420) (0xc000adaa00) Stream removed, broadcasting: 1 I0312 22:02:56.927385 6 log.go:172] (0xc001eea420) Go away received I0312 22:02:56.927548 6 log.go:172] (0xc001eea420) (0xc000adaa00) Stream removed, broadcasting: 1 I0312 22:02:56.927592 6 log.go:172] (0xc001eea420) (0xc00096db80) Stream removed, broadcasting: 3 I0312 22:02:56.927618 6 log.go:172] (0xc001eea420) (0xc000adaaa0) Stream removed, broadcasting: 5 Mar 12 22:02:56.927: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:02:56.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1356" for this suite. 
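Aside: the ExecWithOptions blocks above are the framework curling agnhost's /dial endpoint: one webserver pod is asked to send a UDP probe to another pod and report which hostnames answered. The same probe can be reproduced by hand from any pod with cluster network access (addresses taken from this run):

curl -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.2.234&port=8081&tries=1'
# a JSON body listing the target's hostname means pod-to-pod UDP works;
# an empty response list would point at a CNI or kube-proxy problem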
• [SLOW TEST:22.403 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:02:56.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-9cf7345f-cf57-46ff-93f1-8b46d332a9e0 STEP: Creating a pod to test consume secrets Mar 12 22:02:57.047: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3db01f59-00fe-483b-be17-9a600ac43b12" in namespace "projected-6318" to be "success or failure" Mar 12 22:02:57.053: INFO: Pod "pod-projected-secrets-3db01f59-00fe-483b-be17-9a600ac43b12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104744ms Mar 12 22:02:59.058: INFO: Pod "pod-projected-secrets-3db01f59-00fe-483b-be17-9a600ac43b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010628624s STEP: Saw pod success Mar 12 22:02:59.058: INFO: Pod "pod-projected-secrets-3db01f59-00fe-483b-be17-9a600ac43b12" satisfied condition "success or failure" Mar 12 22:02:59.061: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3db01f59-00fe-483b-be17-9a600ac43b12 container projected-secret-volume-test: STEP: delete the pod Mar 12 22:02:59.119: INFO: Waiting for pod pod-projected-secrets-3db01f59-00fe-483b-be17-9a600ac43b12 to disappear Mar 12 22:02:59.125: INFO: Pod pod-projected-secrets-3db01f59-00fe-483b-be17-9a600ac43b12 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:02:59.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6318" for this suite. 
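Aside: the projected-secret test above maps one secret key to a custom path with an explicit per-file mode, rather than projecting every key with the volume default. A sketch of the stanza it exercises (names illustrative):

kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400
EOF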
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:02:59.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0312 22:03:09.291575 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 22:03:09.291: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:09.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6906" for this suite. 
• [SLOW TEST:10.162 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":193,"skipped":3173,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:09.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:03:09.359: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-235aae3f-f544-4ffb-ad7b-36c03948c8ca" in namespace "security-context-test-4495" to be "success or failure" Mar 12 22:03:09.362: INFO: Pod "alpine-nnp-false-235aae3f-f544-4ffb-ad7b-36c03948c8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902682ms Mar 12 22:03:11.364: INFO: Pod "alpine-nnp-false-235aae3f-f544-4ffb-ad7b-36c03948c8ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005507472s Mar 12 22:03:11.364: INFO: Pod "alpine-nnp-false-235aae3f-f544-4ffb-ad7b-36c03948c8ca" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:11.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4495" for this suite. 
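Aside: the Security Context test above starts a non-root container with allowPrivilegeEscalation: false and verifies the kernel's no_new_privs bit is set, which blocks setuid binaries from gaining privileges. A minimal sketch (image choice is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: alpine:3.9
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false
EOF
kubectl logs nnp-false-demo   # once completed, expect: NoNewPrivs: 1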
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3179,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:11.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3493 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-3493 Mar 12 22:03:11.450: INFO: Found 0 stateful pods, waiting for 1 Mar 12 22:03:21.453: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 22:03:21.474: INFO: Deleting all statefulset in ns statefulset-3493 Mar 12 22:03:21.480: INFO: Scaling statefulset ss to 0 Mar 12 22:03:41.549: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 22:03:41.552: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:41.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3493" for this suite. 
• [SLOW TEST:30.196 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":195,"skipped":3196,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:41.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 12 22:03:41.619: INFO: Waiting up to 5m0s for pod "pod-b631cc90-edeb-4542-ad06-3e29629883d6" in namespace "emptydir-5853" to be "success or failure" Mar 12 22:03:41.622: INFO: Pod "pod-b631cc90-edeb-4542-ad06-3e29629883d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620094ms Mar 12 22:03:43.625: INFO: Pod "pod-b631cc90-edeb-4542-ad06-3e29629883d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006367931s STEP: Saw pod success Mar 12 22:03:43.625: INFO: Pod "pod-b631cc90-edeb-4542-ad06-3e29629883d6" satisfied condition "success or failure" Mar 12 22:03:43.628: INFO: Trying to get logs from node jerma-worker pod pod-b631cc90-edeb-4542-ad06-3e29629883d6 container test-container: STEP: delete the pod Mar 12 22:03:43.659: INFO: Waiting for pod pod-b631cc90-edeb-4542-ad06-3e29629883d6 to disappear Mar 12 22:03:43.665: INFO: Pod pod-b631cc90-edeb-4542-ad06-3e29629883d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:43.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5853" for this suite. 
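Aside: with no medium set, an emptyDir is backed by node-local disk and, as the test above checks, mounted world-writable so any container user can use it. A sketch of the same check:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-mode-demo   # expect drwxrwxrwx (mode 0777)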
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:43.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5939.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5939.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 22:03:47.844: INFO: DNS probes using dns-5939/dns-test-0588dfd9-3fa8-4928-80ad-ea9da6bfde2a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:47.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5939" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":197,"skipped":3269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:47.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-c7814cce-7cac-4339-8a18-99a23ad217bb STEP: Creating a pod to test consume configMaps Mar 12 22:03:48.079: INFO: Waiting up to 5m0s for pod "pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98" in namespace "configmap-6102" to be "success or failure" Mar 12 22:03:48.085: INFO: Pod "pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98": Phase="Pending", Reason="", readiness=false. Elapsed: 5.907632ms Mar 12 22:03:50.089: INFO: Pod "pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00918576s Mar 12 22:03:52.092: INFO: Pod "pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012765297s STEP: Saw pod success Mar 12 22:03:52.092: INFO: Pod "pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98" satisfied condition "success or failure" Mar 12 22:03:52.094: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98 container configmap-volume-test: STEP: delete the pod Mar 12 22:03:52.125: INFO: Waiting for pod pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98 to disappear Mar 12 22:03:52.129: INFO: Pod pod-configmaps-41071d33-440c-4050-93bb-bc4f3cc61d98 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:52.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6102" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3299,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:52.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:03:52.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82aa5919-13a8-401d-8309-3bac90d628cb" in namespace "projected-8658" to be "success or failure" Mar 12 22:03:52.201: INFO: Pod "downwardapi-volume-82aa5919-13a8-401d-8309-3bac90d628cb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.343558ms Mar 12 22:03:54.204: INFO: Pod "downwardapi-volume-82aa5919-13a8-401d-8309-3bac90d628cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006649912s STEP: Saw pod success Mar 12 22:03:54.205: INFO: Pod "downwardapi-volume-82aa5919-13a8-401d-8309-3bac90d628cb" satisfied condition "success or failure" Mar 12 22:03:54.207: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-82aa5919-13a8-401d-8309-3bac90d628cb container client-container: STEP: delete the pod Mar 12 22:03:54.256: INFO: Waiting for pod downwardapi-volume-82aa5919-13a8-401d-8309-3bac90d628cb to disappear Mar 12 22:03:54.267: INFO: Pod downwardapi-volume-82aa5919-13a8-401d-8309-3bac90d628cb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:54.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8658" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3314,"failed":0} SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:54.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-27a2dedf-c24e-4fff-a11d-ae2263468fc8 STEP: Creating secret with name secret-projected-all-test-volume-a4caadaa-08cb-4cda-ba30-7267eb9b85e4 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 12 22:03:54.334: INFO: Waiting up to 5m0s for pod "projected-volume-11c45013-1803-41eb-960f-2365e077b463" in namespace "projected-6286" to be "success or failure" Mar 12 22:03:54.339: INFO: Pod "projected-volume-11c45013-1803-41eb-960f-2365e077b463": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119798ms Mar 12 22:03:56.341: INFO: Pod "projected-volume-11c45013-1803-41eb-960f-2365e077b463": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006920553s STEP: Saw pod success Mar 12 22:03:56.341: INFO: Pod "projected-volume-11c45013-1803-41eb-960f-2365e077b463" satisfied condition "success or failure" Mar 12 22:03:56.343: INFO: Trying to get logs from node jerma-worker pod projected-volume-11c45013-1803-41eb-960f-2365e077b463 container projected-all-volume-test: STEP: delete the pod Mar 12 22:03:56.383: INFO: Waiting for pod projected-volume-11c45013-1803-41eb-960f-2365e077b463 to disappear Mar 12 22:03:56.390: INFO: Pod projected-volume-11c45013-1803-41eb-960f-2365e077b463 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:03:56.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6286" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3319,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:03:56.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 12 22:04:00.965: INFO: Successfully updated pod "adopt-release-7fnz5" STEP: Checking that the Job readopts the Pod Mar 12 22:04:00.965: INFO: Waiting up to 15m0s for pod "adopt-release-7fnz5" in namespace "job-5436" to be "adopted" Mar 12 22:04:00.972: INFO: Pod "adopt-release-7fnz5": Phase="Running", Reason="", readiness=true. Elapsed: 6.968523ms Mar 12 22:04:02.976: INFO: Pod "adopt-release-7fnz5": Phase="Running", Reason="", readiness=true. Elapsed: 2.011056663s Mar 12 22:04:02.976: INFO: Pod "adopt-release-7fnz5" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 12 22:04:03.483: INFO: Successfully updated pod "adopt-release-7fnz5" STEP: Checking that the Job releases the Pod Mar 12 22:04:03.483: INFO: Waiting up to 15m0s for pod "adopt-release-7fnz5" in namespace "job-5436" to be "released" Mar 12 22:04:03.513: INFO: Pod "adopt-release-7fnz5": Phase="Running", Reason="", readiness=true. Elapsed: 29.470573ms Mar 12 22:04:05.517: INFO: Pod "adopt-release-7fnz5": Phase="Running", Reason="", readiness=true. Elapsed: 2.03336981s Mar 12 22:04:05.517: INFO: Pod "adopt-release-7fnz5" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:04:05.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5436" for this suite. 
• [SLOW TEST:9.128 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":201,"skipped":3327,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:04:05.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9712 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 12 22:04:05.633: INFO: Found 0 stateful pods, waiting for 3 Mar 12 22:04:15.637: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 22:04:15.637: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 22:04:15.637: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 12 22:04:15.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9712 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 22:04:15.871: INFO: stderr: "I0312 22:04:15.782807 2818 log.go:172] (0xc0003080b0) (0xc0003bb5e0) Create stream\nI0312 22:04:15.782873 2818 log.go:172] (0xc0003080b0) (0xc0003bb5e0) Stream added, broadcasting: 1\nI0312 22:04:15.785263 2818 log.go:172] (0xc0003080b0) Reply frame received for 1\nI0312 22:04:15.785304 2818 log.go:172] (0xc0003080b0) (0xc0002a8000) Create stream\nI0312 22:04:15.785317 2818 log.go:172] (0xc0003080b0) (0xc0002a8000) Stream added, broadcasting: 3\nI0312 22:04:15.786396 2818 log.go:172] (0xc0003080b0) Reply frame received for 3\nI0312 22:04:15.786433 2818 log.go:172] (0xc0003080b0) (0xc00066bb80) Create stream\nI0312 22:04:15.786444 2818 log.go:172] (0xc0003080b0) (0xc00066bb80) Stream added, broadcasting: 5\nI0312 22:04:15.787777 2818 log.go:172] (0xc0003080b0) Reply frame received for 5\nI0312 22:04:15.849779 2818 log.go:172] (0xc0003080b0) Data frame received for 5\nI0312 22:04:15.849806 2818 log.go:172] (0xc00066bb80) (5) Data frame handling\nI0312 22:04:15.849822 2818 log.go:172] (0xc00066bb80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 22:04:15.865164 2818 log.go:172] (0xc0003080b0) Data frame 
received for 5\nI0312 22:04:15.865181 2818 log.go:172] (0xc00066bb80) (5) Data frame handling\nI0312 22:04:15.865214 2818 log.go:172] (0xc0003080b0) Data frame received for 3\nI0312 22:04:15.865303 2818 log.go:172] (0xc0002a8000) (3) Data frame handling\nI0312 22:04:15.865328 2818 log.go:172] (0xc0002a8000) (3) Data frame sent\nI0312 22:04:15.865337 2818 log.go:172] (0xc0003080b0) Data frame received for 3\nI0312 22:04:15.865344 2818 log.go:172] (0xc0002a8000) (3) Data frame handling\nI0312 22:04:15.867043 2818 log.go:172] (0xc0003080b0) Data frame received for 1\nI0312 22:04:15.867068 2818 log.go:172] (0xc0003bb5e0) (1) Data frame handling\nI0312 22:04:15.867080 2818 log.go:172] (0xc0003bb5e0) (1) Data frame sent\nI0312 22:04:15.867096 2818 log.go:172] (0xc0003080b0) (0xc0003bb5e0) Stream removed, broadcasting: 1\nI0312 22:04:15.867142 2818 log.go:172] (0xc0003080b0) Go away received\nI0312 22:04:15.867408 2818 log.go:172] (0xc0003080b0) (0xc0003bb5e0) Stream removed, broadcasting: 1\nI0312 22:04:15.867421 2818 log.go:172] (0xc0003080b0) (0xc0002a8000) Stream removed, broadcasting: 3\nI0312 22:04:15.867428 2818 log.go:172] (0xc0003080b0) (0xc00066bb80) Stream removed, broadcasting: 5\n" Mar 12 22:04:15.871: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 22:04:15.871: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 12 22:04:25.902: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 12 22:04:35.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9712 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:04:36.199: INFO: stderr: "I0312 22:04:36.115617 2840 log.go:172] (0xc0000f42c0) (0xc00060e780) Create stream\nI0312 22:04:36.115667 2840 log.go:172] (0xc0000f42c0) (0xc00060e780) Stream added, broadcasting: 1\nI0312 22:04:36.117897 2840 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0312 22:04:36.117924 2840 log.go:172] (0xc0000f42c0) (0xc000755540) Create stream\nI0312 22:04:36.117930 2840 log.go:172] (0xc0000f42c0) (0xc000755540) Stream added, broadcasting: 3\nI0312 22:04:36.118840 2840 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0312 22:04:36.118886 2840 log.go:172] (0xc0000f42c0) (0xc0006b5cc0) Create stream\nI0312 22:04:36.118899 2840 log.go:172] (0xc0000f42c0) (0xc0006b5cc0) Stream added, broadcasting: 5\nI0312 22:04:36.119778 2840 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0312 22:04:36.193024 2840 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0312 22:04:36.193053 2840 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0312 22:04:36.193071 2840 log.go:172] (0xc0006b5cc0) (5) Data frame handling\nI0312 22:04:36.193082 2840 log.go:172] (0xc0006b5cc0) (5) Data frame sent\nI0312 22:04:36.193091 2840 log.go:172] (0xc0000f42c0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 22:04:36.193108 2840 log.go:172] (0xc0006b5cc0) (5) Data frame handling\nI0312 22:04:36.193130 2840 log.go:172] (0xc000755540) (3) Data frame handling\nI0312 22:04:36.193142 2840 log.go:172] (0xc000755540) (3) Data frame sent\nI0312 22:04:36.193153 2840 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0312 
22:04:36.193163 2840 log.go:172] (0xc000755540) (3) Data frame handling\nI0312 22:04:36.194686 2840 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0312 22:04:36.194701 2840 log.go:172] (0xc00060e780) (1) Data frame handling\nI0312 22:04:36.194715 2840 log.go:172] (0xc00060e780) (1) Data frame sent\nI0312 22:04:36.194726 2840 log.go:172] (0xc0000f42c0) (0xc00060e780) Stream removed, broadcasting: 1\nI0312 22:04:36.194969 2840 log.go:172] (0xc0000f42c0) (0xc00060e780) Stream removed, broadcasting: 1\nI0312 22:04:36.194983 2840 log.go:172] (0xc0000f42c0) (0xc000755540) Stream removed, broadcasting: 3\nI0312 22:04:36.194990 2840 log.go:172] (0xc0000f42c0) (0xc0006b5cc0) Stream removed, broadcasting: 5\n" Mar 12 22:04:36.199: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 22:04:36.199: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 22:04:46.246: INFO: Waiting for StatefulSet statefulset-9712/ss2 to complete update Mar 12 22:04:46.246: INFO: Waiting for Pod statefulset-9712/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 12 22:04:56.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9712 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 22:04:56.468: INFO: stderr: "I0312 22:04:56.379797 2860 log.go:172] (0xc000a7c840) (0xc000a8a3c0) Create stream\nI0312 22:04:56.379952 2860 log.go:172] (0xc000a7c840) (0xc000a8a3c0) Stream added, broadcasting: 1\nI0312 22:04:56.382501 2860 log.go:172] (0xc000a7c840) Reply frame received for 1\nI0312 22:04:56.382545 2860 log.go:172] (0xc000a7c840) (0xc0007fa780) Create stream\nI0312 22:04:56.382562 2860 log.go:172] (0xc000a7c840) (0xc0007fa780) Stream added, broadcasting: 3\nI0312 22:04:56.383096 2860 log.go:172] (0xc000a7c840) Reply frame received for 3\nI0312 22:04:56.383116 2860 log.go:172] (0xc000a7c840) (0xc000511540) Create stream\nI0312 22:04:56.383124 2860 log.go:172] (0xc000a7c840) (0xc000511540) Stream added, broadcasting: 5\nI0312 22:04:56.383840 2860 log.go:172] (0xc000a7c840) Reply frame received for 5\nI0312 22:04:56.444323 2860 log.go:172] (0xc000a7c840) Data frame received for 5\nI0312 22:04:56.444342 2860 log.go:172] (0xc000511540) (5) Data frame handling\nI0312 22:04:56.444353 2860 log.go:172] (0xc000511540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 22:04:56.464083 2860 log.go:172] (0xc000a7c840) Data frame received for 5\nI0312 22:04:56.464124 2860 log.go:172] (0xc000511540) (5) Data frame handling\nI0312 22:04:56.464154 2860 log.go:172] (0xc000a7c840) Data frame received for 3\nI0312 22:04:56.464170 2860 log.go:172] (0xc0007fa780) (3) Data frame handling\nI0312 22:04:56.464188 2860 log.go:172] (0xc0007fa780) (3) Data frame sent\nI0312 22:04:56.464202 2860 log.go:172] (0xc000a7c840) Data frame received for 3\nI0312 22:04:56.464218 2860 log.go:172] (0xc0007fa780) (3) Data frame handling\nI0312 22:04:56.465413 2860 log.go:172] (0xc000a7c840) Data frame received for 1\nI0312 22:04:56.465433 2860 log.go:172] (0xc000a8a3c0) (1) Data frame handling\nI0312 22:04:56.465453 2860 log.go:172] (0xc000a8a3c0) (1) Data frame sent\nI0312 22:04:56.465485 2860 log.go:172] (0xc000a7c840) (0xc000a8a3c0) Stream removed, broadcasting: 1\nI0312 22:04:56.465510 2860 log.go:172] (0xc000a7c840) Go away received\nI0312 
22:04:56.465805 2860 log.go:172] (0xc000a7c840) (0xc000a8a3c0) Stream removed, broadcasting: 1\nI0312 22:04:56.465826 2860 log.go:172] (0xc000a7c840) (0xc0007fa780) Stream removed, broadcasting: 3\nI0312 22:04:56.465838 2860 log.go:172] (0xc000a7c840) (0xc000511540) Stream removed, broadcasting: 5\n" Mar 12 22:04:56.469: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 22:04:56.469: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 22:05:06.497: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 12 22:05:16.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9712 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 22:05:16.694: INFO: stderr: "I0312 22:05:16.630684 2881 log.go:172] (0xc000a351e0) (0xc000b04780) Create stream\nI0312 22:05:16.630720 2881 log.go:172] (0xc000a351e0) (0xc000b04780) Stream added, broadcasting: 1\nI0312 22:05:16.632232 2881 log.go:172] (0xc000a351e0) Reply frame received for 1\nI0312 22:05:16.632256 2881 log.go:172] (0xc000a351e0) (0xc00094c3c0) Create stream\nI0312 22:05:16.632266 2881 log.go:172] (0xc000a351e0) (0xc00094c3c0) Stream added, broadcasting: 3\nI0312 22:05:16.632844 2881 log.go:172] (0xc000a351e0) Reply frame received for 3\nI0312 22:05:16.632859 2881 log.go:172] (0xc000a351e0) (0xc0009540a0) Create stream\nI0312 22:05:16.632865 2881 log.go:172] (0xc000a351e0) (0xc0009540a0) Stream added, broadcasting: 5\nI0312 22:05:16.633598 2881 log.go:172] (0xc000a351e0) Reply frame received for 5\nI0312 22:05:16.690799 2881 log.go:172] (0xc000a351e0) Data frame received for 3\nI0312 22:05:16.690818 2881 log.go:172] (0xc00094c3c0) (3) Data frame handling\nI0312 22:05:16.690825 2881 log.go:172] (0xc00094c3c0) (3) Data frame sent\nI0312 22:05:16.690829 2881 log.go:172] (0xc000a351e0) Data frame received for 3\nI0312 22:05:16.690833 2881 log.go:172] (0xc00094c3c0) (3) Data frame handling\nI0312 22:05:16.690850 2881 log.go:172] (0xc000a351e0) Data frame received for 5\nI0312 22:05:16.690857 2881 log.go:172] (0xc0009540a0) (5) Data frame handling\nI0312 22:05:16.690864 2881 log.go:172] (0xc0009540a0) (5) Data frame sent\nI0312 22:05:16.690868 2881 log.go:172] (0xc000a351e0) Data frame received for 5\nI0312 22:05:16.690872 2881 log.go:172] (0xc0009540a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 22:05:16.691732 2881 log.go:172] (0xc000a351e0) Data frame received for 1\nI0312 22:05:16.691740 2881 log.go:172] (0xc000b04780) (1) Data frame handling\nI0312 22:05:16.691745 2881 log.go:172] (0xc000b04780) (1) Data frame sent\nI0312 22:05:16.691753 2881 log.go:172] (0xc000a351e0) (0xc000b04780) Stream removed, broadcasting: 1\nI0312 22:05:16.691764 2881 log.go:172] (0xc000a351e0) Go away received\nI0312 22:05:16.692032 2881 log.go:172] (0xc000a351e0) (0xc000b04780) Stream removed, broadcasting: 1\nI0312 22:05:16.692048 2881 log.go:172] (0xc000a351e0) (0xc00094c3c0) Stream removed, broadcasting: 3\nI0312 22:05:16.692053 2881 log.go:172] (0xc000a351e0) (0xc0009540a0) Stream removed, broadcasting: 5\n" Mar 12 22:05:16.694: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 22:05:16.694: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 22:05:26.773: 
INFO: Waiting for StatefulSet statefulset-9712/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 22:05:36.780: INFO: Deleting all statefulset in ns statefulset-9712 Mar 12 22:05:36.783: INFO: Scaling statefulset ss2 to 0 Mar 12 22:05:46.800: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 22:05:46.803: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:05:46.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9712" for this suite. • [SLOW TEST:101.318 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":202,"skipped":3345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:05:46.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 12 22:05:46.884: INFO: namespace kubectl-9889 Mar 12 22:05:46.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9889' Mar 12 22:05:47.154: INFO: stderr: "" Mar 12 22:05:47.154: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 22:05:48.158: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 22:05:48.158: INFO: Found 0 / 1 Mar 12 22:05:49.157: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 22:05:49.158: INFO: Found 1 / 1 Mar 12 22:05:49.158: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 12 22:05:49.160: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 22:05:49.160: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 12 22:05:49.160: INFO: wait on agnhost-master startup in kubectl-9889 Mar 12 22:05:49.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-xtqcc agnhost-master --namespace=kubectl-9889' Mar 12 22:05:49.296: INFO: stderr: "" Mar 12 22:05:49.296: INFO: stdout: "Paused\n" STEP: exposing RC Mar 12 22:05:49.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9889' Mar 12 22:05:49.395: INFO: stderr: "" Mar 12 22:05:49.395: INFO: stdout: "service/rm2 exposed\n" Mar 12 22:05:49.401: INFO: Service rm2 in namespace kubectl-9889 found. STEP: exposing service Mar 12 22:05:51.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9889' Mar 12 22:05:51.566: INFO: stderr: "" Mar 12 22:05:51.566: INFO: stdout: "service/rm3 exposed\n" Mar 12 22:05:51.582: INFO: Service rm3 in namespace kubectl-9889 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:05:53.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9889" for this suite. • [SLOW TEST:6.752 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":203,"skipped":3372,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:05:53.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 12 22:05:53.671: INFO: Waiting up to 5m0s for pod "pod-f8b94cb0-0d35-4bdc-aade-4d667e2b8868" in namespace "emptydir-5545" to be "success or failure" Mar 12 22:05:53.677: INFO: Pod "pod-f8b94cb0-0d35-4bdc-aade-4d667e2b8868": Phase="Pending", Reason="", readiness=false. Elapsed: 5.834494ms Mar 12 22:05:55.681: INFO: Pod "pod-f8b94cb0-0d35-4bdc-aade-4d667e2b8868": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009763059s STEP: Saw pod success Mar 12 22:05:55.681: INFO: Pod "pod-f8b94cb0-0d35-4bdc-aade-4d667e2b8868" satisfied condition "success or failure" Mar 12 22:05:55.684: INFO: Trying to get logs from node jerma-worker2 pod pod-f8b94cb0-0d35-4bdc-aade-4d667e2b8868 container test-container: STEP: delete the pod Mar 12 22:05:55.715: INFO: Waiting for pod pod-f8b94cb0-0d35-4bdc-aade-4d667e2b8868 to disappear Mar 12 22:05:55.719: INFO: Pod pod-f8b94cb0-0d35-4bdc-aade-4d667e2b8868 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:05:55.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5545" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:05:55.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 12 22:05:55.864: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6148 /api/v1/namespaces/watch-6148/configmaps/e2e-watch-test-resource-version bfdc2107-fbb0-4fdf-bc81-e48efe61cc6b 1254222 0 2020-03-12 22:05:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 22:05:55.865: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6148 /api/v1/namespaces/watch-6148/configmaps/e2e-watch-test-resource-version bfdc2107-fbb0-4fdf-bc81-e48efe61cc6b 1254223 0 2020-03-12 22:05:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:05:55.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6148" for this suite. 
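------------------------------
A watch opened at a saved resourceVersion replays every change after that point, which is why the watcher above receives the second MODIFIED event (mutation: 2) and the DELETED event but nothing earlier. A rough equivalent against the raw API, with a placeholder standing in for the resource version returned by the first update:

kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/watch-6148/configmaps?watch=1&resourceVersion=<rv-from-first-update>"
------------------------------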
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":205,"skipped":3400,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:05:55.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:02.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8949" for this suite. STEP: Destroying namespace "nsdeletetest-2363" for this suite. Mar 12 22:06:02.132: INFO: Namespace nsdeletetest-2363 was already deleted STEP: Destroying namespace "nsdeletetest-2250" for this suite. 
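------------------------------
Deleting a namespace garbage-collects everything inside it, Services included, and a recreated namespace of the same name starts empty; that is the invariant the spec above checks. A quick reproduction with illustrative names:

kubectl create namespace nsdeletetest-demo
kubectl -n nsdeletetest-demo create service clusterip probe-svc --tcp=80:80
kubectl delete namespace nsdeletetest-demo --wait=true
kubectl create namespace nsdeletetest-demo
kubectl -n nsdeletetest-demo get services   # expect: No resources found
------------------------------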
• [SLOW TEST:6.261 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":206,"skipped":3400,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:02.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 22:06:02.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6027' Mar 12 22:06:02.276: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 12 22:06:02.276: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 Mar 12 22:06:04.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6027' Mar 12 22:06:04.408: INFO: stderr: "" Mar 12 22:06:04.408: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:04.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6027" for this suite. 
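------------------------------
The stderr line above is kubectl's deprecation warning for generator-based `kubectl run`; per that same message, the non-deprecated forms are an explicit Deployment or a plain pod:

kubectl -n kubectl-6027 create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
kubectl -n kubectl-6027 run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine
------------------------------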
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":207,"skipped":3419,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:04.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:06:04.480: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 12 22:06:09.496: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 22:06:09.496: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 12 22:06:11.500: INFO: Creating deployment "test-rollover-deployment" Mar 12 22:06:11.512: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 12 22:06:13.525: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 12 22:06:13.529: INFO: Ensure that both replica sets have 1 created replica Mar 12 22:06:13.533: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 12 22:06:13.538: INFO: Updating deployment test-rollover-deployment Mar 12 22:06:13.538: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 12 22:06:15.580: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 12 22:06:15.587: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 12 22:06:15.593: INFO: all replica sets need to contain the pod-template-hash label Mar 12 22:06:15.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647575, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 22:06:17.598: INFO: all replica sets need to contain the pod-template-hash label Mar 12 22:06:17.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647575, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 22:06:19.600: INFO: all replica sets need to contain the pod-template-hash label Mar 12 22:06:19.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647575, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 22:06:21.601: INFO: all replica sets need to contain the pod-template-hash label Mar 12 22:06:21.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647575, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 22:06:23.599: INFO: all replica sets need to contain the pod-template-hash label Mar 12 22:06:23.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63719647575, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 22:06:25.613: INFO: Mar 12 22:06:25.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647585, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647571, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 22:06:27.599: INFO: Mar 12 22:06:27.599: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 22:06:27.607: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4477 /apis/apps/v1/namespaces/deployment-4477/deployments/test-rollover-deployment 372149c4-74ff-4f4b-a9b5-099b25072834 1254502 2 2020-03-12 22:06:11 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056230b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 22:06:11 +0000 UTC,LastTransitionTime:2020-03-12 22:06:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-12 22:06:25 +0000 
UTC,LastTransitionTime:2020-03-12 22:06:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 12 22:06:27.610: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4477 /apis/apps/v1/namespaces/deployment-4477/replicasets/test-rollover-deployment-574d6dfbff 73f42e41-2e4d-404b-9208-73a504041632 1254492 2 2020-03-12 22:06:13 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 372149c4-74ff-4f4b-a9b5-099b25072834 0xc005623527 0xc005623528}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005623598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 22:06:27.610: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 12 22:06:27.610: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4477 /apis/apps/v1/namespaces/deployment-4477/replicasets/test-rollover-controller cb1c16b9-6988-4107-b1a6-a56a099a740b 1254501 2 2020-03-12 22:06:04 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 372149c4-74ff-4f4b-a9b5-099b25072834 0xc00562343f 0xc005623450}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0056234b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 22:06:27.610: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4477 /apis/apps/v1/namespaces/deployment-4477/replicasets/test-rollover-deployment-f6c94f66c e65d7eb8-985c-4c10-b44e-e8d1abc51f7c 
1254446 2 2020-03-12 22:06:11 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 372149c4-74ff-4f4b-a9b5-099b25072834 0xc005623600 0xc005623601}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005623688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 22:06:27.613: INFO: Pod "test-rollover-deployment-574d6dfbff-srqv7" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-srqv7 test-rollover-deployment-574d6dfbff- deployment-4477 /api/v1/namespaces/deployment-4477/pods/test-rollover-deployment-574d6dfbff-srqv7 38852349-2e9b-464b-bb87-09090f85dffe 1254457 0 2020-03-12 22:06:13 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 73f42e41-2e4d-404b-9208-73a504041632 0xc005623c07 0xc005623c08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mj5wc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mj5wc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mj5wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:06:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:06:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.252,StartTime:2020-03-12 22:06:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 22:06:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://036ec3e8748512d9160eaf942647fb61ced5130f9d8bb66ee37b402fc46d37a0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:27.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4477" for this suite. • [SLOW TEST:23.207 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":208,"skipped":3421,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:27.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:06:27.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3" in namespace "downward-api-9011" to be "success or failure" Mar 12 22:06:27.674: INFO: Pod "downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254849ms Mar 12 22:06:29.677: INFO: Pod "downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0075395s Mar 12 22:06:31.681: INFO: Pod "downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011200254s STEP: Saw pod success Mar 12 22:06:31.681: INFO: Pod "downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3" satisfied condition "success or failure" Mar 12 22:06:31.684: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3 container client-container: STEP: delete the pod Mar 12 22:06:31.725: INFO: Waiting for pod downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3 to disappear Mar 12 22:06:31.733: INFO: Pod downwardapi-volume-d302530b-06ae-4c75-935a-c6cd49e6bed3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:31.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9011" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:31.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:06:31.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 12 22:06:31.892: INFO: stderr: "" Mar 12 22:06:31.892: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:31.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-409" for this suite. 
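------------------------------
The repeated, near-identical status polls in the rollover spec above are expected: the Deployment dump shows MinReadySeconds:10 with a RollingUpdate strategy of MaxUnavailable:0 / MaxSurge:1, so the single surge pod must stay Ready for a full 10 seconds before the old ReplicaSet can be scaled to zero. The image swap the test performs, expressed as a kubectl sketch:

kubectl -n deployment-4477 set image deployment/test-rollover-deployment agnhost=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl -n deployment-4477 rollout status deployment/test-rollover-deployment
------------------------------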
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":210,"skipped":3546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:31.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 12 22:06:31.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5074' Mar 12 22:06:32.347: INFO: stderr: "" Mar 12 22:06:32.347: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 22:06:32.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5074' Mar 12 22:06:32.681: INFO: stderr: "" Mar 12 22:06:32.681: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Mar 12 22:06:37.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5074' Mar 12 22:06:37.801: INFO: stderr: "" Mar 12 22:06:37.801: INFO: stdout: "update-demo-nautilus-h8kh2 update-demo-nautilus-rgfl8 " Mar 12 22:06:37.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h8kh2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5074' Mar 12 22:06:37.892: INFO: stderr: "" Mar 12 22:06:37.892: INFO: stdout: "true" Mar 12 22:06:37.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h8kh2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5074' Mar 12 22:06:37.968: INFO: stderr: "" Mar 12 22:06:37.968: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:06:37.968: INFO: validating pod update-demo-nautilus-h8kh2 Mar 12 22:06:37.971: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:06:37.971: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
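------------------------------
The -o template probes above are a compact way to script readiness checks: the first template prints the names of matching pods, and the per-pod template prints "true" only when the update-demo container reports a running state. The name-listing form, reusable as-is:

kubectl -n kubectl-5074 get pods -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
------------------------------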
Mar 12 22:06:37.971: INFO: update-demo-nautilus-h8kh2 is verified up and running Mar 12 22:06:37.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgfl8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5074' Mar 12 22:06:38.048: INFO: stderr: "" Mar 12 22:06:38.048: INFO: stdout: "true" Mar 12 22:06:38.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgfl8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5074' Mar 12 22:06:38.115: INFO: stderr: "" Mar 12 22:06:38.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:06:38.115: INFO: validating pod update-demo-nautilus-rgfl8 Mar 12 22:06:38.118: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:06:38.118: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:06:38.118: INFO: update-demo-nautilus-rgfl8 is verified up and running STEP: using delete to clean up resources Mar 12 22:06:38.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5074' Mar 12 22:06:38.196: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 22:06:38.196: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 12 22:06:38.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5074' Mar 12 22:06:38.285: INFO: stderr: "No resources found in kubectl-5074 namespace.\n" Mar 12 22:06:38.285: INFO: stdout: "" Mar 12 22:06:38.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5074 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 22:06:38.346: INFO: stderr: "" Mar 12 22:06:38.346: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:38.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5074" for this suite. 
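------------------------------
The warning emitted during cleanup above is worth taking at face value: --grace-period=0 --force removes the API object immediately without waiting for kubelet confirmation, so containers may keep running on the node for a short time. The cleanup pattern from the run:

kubectl -n kubectl-5074 delete rc update-demo-nautilus --grace-period=0 --force
kubectl -n kubectl-5074 get rc,svc -l name=update-demo --no-headers   # empty once deletion lands
------------------------------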
• [SLOW TEST:6.449 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":211,"skipped":3589,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:38.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 22:06:38.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5314' Mar 12 22:06:38.514: INFO: stderr: "" Mar 12 22:06:38.515: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 Mar 12 22:06:38.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5314' Mar 12 22:06:46.035: INFO: stderr: "" Mar 12 22:06:46.035: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:46.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5314" for this suite. 
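------------------------------
With --restart=Never and the run-pod/v1 generator, kubectl run creates a bare Pod instead of a Deployment, which is why the cleanup above deletes a pod rather than a workload controller. The creation command from the run, reusable verbatim:

kubectl -n kubectl-5314 run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine
------------------------------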
• [SLOW TEST:7.707 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":212,"skipped":3599,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:46.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:06:46.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5048f130-d32c-4f0c-9909-0e8618013cc0" in namespace "projected-3099" to be "success or failure" Mar 12 22:06:46.145: INFO: Pod "downwardapi-volume-5048f130-d32c-4f0c-9909-0e8618013cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.350854ms Mar 12 22:06:48.148: INFO: Pod "downwardapi-volume-5048f130-d32c-4f0c-9909-0e8618013cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022039192s STEP: Saw pod success Mar 12 22:06:48.149: INFO: Pod "downwardapi-volume-5048f130-d32c-4f0c-9909-0e8618013cc0" satisfied condition "success or failure" Mar 12 22:06:48.151: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5048f130-d32c-4f0c-9909-0e8618013cc0 container client-container: STEP: delete the pod Mar 12 22:06:48.173: INFO: Waiting for pod downwardapi-volume-5048f130-d32c-4f0c-9909-0e8618013cc0 to disappear Mar 12 22:06:48.183: INFO: Pod downwardapi-volume-5048f130-d32c-4f0c-9909-0e8618013cc0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:48.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3099" for this suite. 
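------------------------------
"Set mode on item file" above corresponds to the per-item mode field of a projected downward API volume. A minimal sketch of such a pod, with the pod name, image, and file path assumed for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400
EOF
------------------------------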
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3611,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:48.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 12 22:06:48.304: INFO: Waiting up to 5m0s for pod "pod-888343f9-5f56-46e9-9010-d08b1539b962" in namespace "emptydir-1869" to be "success or failure" Mar 12 22:06:48.334: INFO: Pod "pod-888343f9-5f56-46e9-9010-d08b1539b962": Phase="Pending", Reason="", readiness=false. Elapsed: 30.608908ms Mar 12 22:06:50.337: INFO: Pod "pod-888343f9-5f56-46e9-9010-d08b1539b962": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033742637s STEP: Saw pod success Mar 12 22:06:50.337: INFO: Pod "pod-888343f9-5f56-46e9-9010-d08b1539b962" satisfied condition "success or failure" Mar 12 22:06:50.340: INFO: Trying to get logs from node jerma-worker2 pod pod-888343f9-5f56-46e9-9010-d08b1539b962 container test-container: STEP: delete the pod Mar 12 22:06:50.365: INFO: Waiting for pod pod-888343f9-5f56-46e9-9010-d08b1539b962 to disappear Mar 12 22:06:50.369: INFO: Pod pod-888343f9-5f56-46e9-9010-d08b1539b962 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:06:50.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1869" for this suite. 
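------------------------------
A tmpfs-backed emptyDir, as exercised by the two emptyDir specs above, is requested with medium: Memory; the (root,0644,tmpfs) variant then checks the permissions of a file written into that mount. A minimal pod sketch with an assumed image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
EOF
------------------------------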
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3618,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:06:50.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:06:50.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 12 22:06:51.083: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T22:06:51Z generation:1 name:name1 resourceVersion:1254756 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a9e3d82c-1757-432e-ae36-2cfc6f50f4d4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 12 22:07:01.088: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T22:07:01Z generation:1 name:name2 resourceVersion:1254807 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6c0cd656-85ce-4b33-a689-0999585f786d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 12 22:07:11.094: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T22:06:51Z generation:2 name:name1 resourceVersion:1254837 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a9e3d82c-1757-432e-ae36-2cfc6f50f4d4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 12 22:07:21.099: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T22:07:01Z generation:2 name:name2 resourceVersion:1254867 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6c0cd656-85ce-4b33-a689-0999585f786d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 12 22:07:31.105: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T22:06:51Z generation:2 name:name1 resourceVersion:1254897 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a9e3d82c-1757-432e-ae36-2cfc6f50f4d4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 12 22:07:41.112: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-12T22:07:01Z generation:2 name:name2 resourceVersion:1254927 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6c0cd656-85ce-4b33-a689-0999585f786d] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:07:51.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8965" for this suite. • [SLOW TEST:61.257 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":215,"skipped":3629,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:07:51.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
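------------------------------
Watches behave the same for custom resources as for built-ins: the ADDED/MODIFIED/DELETED stream in the CRD spec above comes from a cluster-scoped resource whose selfLinks identify group mygroup.example.com, version v1beta1, plural noxus. Assuming that CRD is installed, the same stream can be observed directly:

kubectl get noxus.mygroup.example.com --watch
curl "http://127.0.0.1:8001/apis/mygroup.example.com/v1beta1/noxus?watch=1"   # via a running kubectl proxy on port 8001
------------------------------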
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 12 22:07:55.742: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 22:07:55.745: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 22:07:57.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 22:07:57.773: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 22:07:59.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 22:07:59.749: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 22:08:01.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 22:08:01.749: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 22:08:03.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 22:08:03.749: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 22:08:05.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 22:08:05.749: INFO: Pod pod-with-poststart-http-hook still exists Mar 12 22:08:07.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 12 22:08:07.750: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:08:07.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5829" for this suite. • [SLOW TEST:16.126 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3644,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:08:07.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:08:07.821: INFO: Waiting up to 
5m0s for pod "downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381" in namespace "projected-4512" to be "success or failure" Mar 12 22:08:07.856: INFO: Pod "downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381": Phase="Pending", Reason="", readiness=false. Elapsed: 35.2629ms Mar 12 22:08:09.860: INFO: Pod "downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039342481s Mar 12 22:08:11.864: INFO: Pod "downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043445037s STEP: Saw pod success Mar 12 22:08:11.864: INFO: Pod "downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381" satisfied condition "success or failure" Mar 12 22:08:11.867: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381 container client-container: STEP: delete the pod Mar 12 22:08:11.905: INFO: Waiting for pod downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381 to disappear Mar 12 22:08:11.909: INFO: Pod downwardapi-volume-63c5d9f8-b9d9-4893-96f9-083e01ef0381 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:08:11.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4512" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3650,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:08:11.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 12 22:08:11.996: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 22:08:12.006: INFO: Waiting for terminating namespaces to be deleted... 
Mar 12 22:08:12.009: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 12 22:08:12.014: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 22:08:12.014: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 22:08:12.014: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 22:08:12.014: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 22:08:12.014: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 12 22:08:12.019: INFO: pod-handle-http-request from container-lifecycle-hook-5829 started at 2020-03-12 22:07:51 +0000 UTC (1 container status recorded) Mar 12 22:08:12.019: INFO: Container pod-handle-http-request ready: true, restart count 0 Mar 12 22:08:12.019: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 22:08:12.019: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 22:08:12.019: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded) Mar 12 22:08:12.019: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3dc64cbc-59da-4da9-9f5a-aa271e1698ed 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-3dc64cbc-59da-4da9-9f5a-aa271e1698ed off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3dc64cbc-59da-4da9-9f5a-aa271e1698ed [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:08:20.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9869" for this suite.
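------------------------------
For reference, a minimal Go sketch of the pod shape this scheduling test exercises — not the suite's actual helper. Two pods may share hostPort 54321 as long as hostIP or protocol differs; the scheduler only reports a conflict when all three collide. Namespace "demo" and the pause image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod binding hostPort 54321 on the given hostIP and
// protocol; the (hostPort, hostIP, protocol) triple is what must be unique
// per node.
func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "demo"}, // assumed namespace
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54321,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	pods := []*corev1.Pod{
		hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP),
		hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP), // differs by hostIP
		hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP), // differs by protocol
	}
	for _, p := range pods {
		port := p.Spec.Containers[0].Ports[0]
		fmt.Println(p.Name, port.HostIP, port.Protocol)
	}
}
------------------------------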
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.320 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":218,"skipped":3657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:08:20.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c17c9daa-3bef-4819-b95c-f3d685e332fe STEP: Creating secret with name s-test-opt-upd-a92fcf15-571c-4815-b860-046ff580f09c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c17c9daa-3bef-4819-b95c-f3d685e332fe STEP: Updating secret s-test-opt-upd-a92fcf15-571c-4815-b860-046ff580f09c STEP: Creating secret with name s-test-opt-create-dc89bd9c-d189-4423-9c31-520ba9ae2fef STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:09:48.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6396" for this suite. 
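------------------------------
For reference, a minimal Go sketch of the optional secret volume this test relies on — not the suite's actual helper. With Optional set, the pod starts even if the referenced secret does not exist yet, and the kubelet later syncs creations, updates, and deletions into the mounted volume, which is what the "waiting to observe update in volume" step watches for. Namespace, secret name, and image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: "demo"}, // assumed names
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create", // assumed: may not exist yet
						Optional:   &optional,           // pod starts anyway
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "consumer",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	fmt.Printf("secret volume optional=%v\n", *pod.Spec.Volumes[0].VolumeSource.Secret.Optional)
}
------------------------------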
• [SLOW TEST:88.479 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3694,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:09:48.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 12 22:09:48.789: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:03.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2534" for this suite. 
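------------------------------
For reference, a minimal Go sketch of a multi-version CRD like the one this OpenAPI-publishing test mutates — not the suite's actual fixture. Renaming a served (non-storage) version republishes the OpenAPI spec under the new name and removes the old one, leaving the other version untouched. The group "example.com" and the Foo kind are illustrative assumptions.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // assumed group/plural
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				// Storage version: stays unchanged through the rename.
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				// Served-only version: renaming this (e.g. v3 -> v4) is the
				// mutation whose effect on the published spec is checked.
				{Name: "v3", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	fmt.Println(crd.Name, "serves", len(crd.Spec.Versions), "versions")
}
------------------------------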
• [SLOW TEST:14.715 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":220,"skipped":3699,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:03.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 12 22:10:05.515: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:05.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5707" for this suite. 
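------------------------------
For reference, a minimal Go sketch of the container shape this termination-message test exercises — not the suite's actual helper. The kubelet reads the file at terminationMessagePath into status.containerStatuses[].state.terminated.message, which is where the "Expected: &{DONE}" comparison above comes from. Namespace, image, UID, and the custom path are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // assumed: any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-pod", Namespace: "demo"}, // assumed
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path; the kubelet bind-mounts a file here and
				// reads it back after the container terminates.
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
				SecurityContext:          &corev1.SecurityContext{RunAsUser: &nonRoot},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePath)
}
------------------------------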
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3701,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:05.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 12 22:10:05.656: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7704 /api/v1/namespaces/watch-7704/configmaps/e2e-watch-test-label-changed 8e4e83c0-86c7-413b-a987-6d9a9dedeacb 1255579 0 2020-03-12 22:10:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 22:10:05.656: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7704 /api/v1/namespaces/watch-7704/configmaps/e2e-watch-test-label-changed 8e4e83c0-86c7-413b-a987-6d9a9dedeacb 1255580 0 2020-03-12 22:10:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 12 22:10:05.657: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7704 /api/v1/namespaces/watch-7704/configmaps/e2e-watch-test-label-changed 8e4e83c0-86c7-413b-a987-6d9a9dedeacb 1255581 0 2020-03-12 22:10:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 12 22:10:15.687: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7704 /api/v1/namespaces/watch-7704/configmaps/e2e-watch-test-label-changed 8e4e83c0-86c7-413b-a987-6d9a9dedeacb 1255628 0 2020-03-12 22:10:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 22:10:15.687: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7704 /api/v1/namespaces/watch-7704/configmaps/e2e-watch-test-label-changed 8e4e83c0-86c7-413b-a987-6d9a9dedeacb 1255629 0 2020-03-12 22:10:05 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 12 22:10:15.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7704 /api/v1/namespaces/watch-7704/configmaps/e2e-watch-test-label-changed 8e4e83c0-86c7-413b-a987-6d9a9dedeacb 1255630 0 2020-03-12 22:10:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:15.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7704" for this suite. • [SLOW TEST:10.099 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":222,"skipped":3710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:15.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1688 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1688 I0312 22:10:15.831888 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1688, replica count: 2 I0312 22:10:18.882330 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 22:10:18.882: INFO: Creating new exec pod Mar 12 22:10:21.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1688 execpod8xgbq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 12 22:10:23.805: INFO: stderr: "I0312 22:10:23.739678 3272 log.go:172] (0xc000105760) (0xc0006a7e00) Create stream\nI0312 22:10:23.739707 3272 log.go:172] (0xc000105760) (0xc0006a7e00) Stream added, broadcasting: 1\nI0312 22:10:23.742012 3272 log.go:172] (0xc000105760) Reply frame received for 1\nI0312 22:10:23.742046 3272 log.go:172] (0xc000105760) (0xc000664640) Create stream\nI0312 
22:10:23.742057 3272 log.go:172] (0xc000105760) (0xc000664640) Stream added, broadcasting: 3\nI0312 22:10:23.743000 3272 log.go:172] (0xc000105760) Reply frame received for 3\nI0312 22:10:23.743045 3272 log.go:172] (0xc000105760) (0xc0004e3400) Create stream\nI0312 22:10:23.743059 3272 log.go:172] (0xc000105760) (0xc0004e3400) Stream added, broadcasting: 5\nI0312 22:10:23.744232 3272 log.go:172] (0xc000105760) Reply frame received for 5\nI0312 22:10:23.796806 3272 log.go:172] (0xc000105760) Data frame received for 5\nI0312 22:10:23.796844 3272 log.go:172] (0xc0004e3400) (5) Data frame handling\nI0312 22:10:23.796866 3272 log.go:172] (0xc0004e3400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0312 22:10:23.798711 3272 log.go:172] (0xc000105760) Data frame received for 5\nI0312 22:10:23.798739 3272 log.go:172] (0xc0004e3400) (5) Data frame handling\nI0312 22:10:23.798760 3272 log.go:172] (0xc0004e3400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0312 22:10:23.799213 3272 log.go:172] (0xc000105760) Data frame received for 3\nI0312 22:10:23.799236 3272 log.go:172] (0xc000664640) (3) Data frame handling\nI0312 22:10:23.799327 3272 log.go:172] (0xc000105760) Data frame received for 5\nI0312 22:10:23.799343 3272 log.go:172] (0xc0004e3400) (5) Data frame handling\nI0312 22:10:23.800959 3272 log.go:172] (0xc000105760) Data frame received for 1\nI0312 22:10:23.800975 3272 log.go:172] (0xc0006a7e00) (1) Data frame handling\nI0312 22:10:23.800983 3272 log.go:172] (0xc0006a7e00) (1) Data frame sent\nI0312 22:10:23.801177 3272 log.go:172] (0xc000105760) (0xc0006a7e00) Stream removed, broadcasting: 1\nI0312 22:10:23.801215 3272 log.go:172] (0xc000105760) Go away received\nI0312 22:10:23.801505 3272 log.go:172] (0xc000105760) (0xc0006a7e00) Stream removed, broadcasting: 1\nI0312 22:10:23.801521 3272 log.go:172] (0xc000105760) (0xc000664640) Stream removed, broadcasting: 3\nI0312 22:10:23.801529 3272 log.go:172] (0xc000105760) (0xc0004e3400) Stream removed, broadcasting: 5\n" Mar 12 22:10:23.805: INFO: stdout: "" Mar 12 22:10:23.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1688 execpod8xgbq -- /bin/sh -x -c nc -zv -t -w 2 10.100.232.44 80' Mar 12 22:10:23.976: INFO: stderr: "I0312 22:10:23.906792 3304 log.go:172] (0xc0000f62c0) (0xc000727c20) Create stream\nI0312 22:10:23.906828 3304 log.go:172] (0xc0000f62c0) (0xc000727c20) Stream added, broadcasting: 1\nI0312 22:10:23.908868 3304 log.go:172] (0xc0000f62c0) Reply frame received for 1\nI0312 22:10:23.908887 3304 log.go:172] (0xc0000f62c0) (0xc0008fa000) Create stream\nI0312 22:10:23.908893 3304 log.go:172] (0xc0000f62c0) (0xc0008fa000) Stream added, broadcasting: 3\nI0312 22:10:23.909507 3304 log.go:172] (0xc0000f62c0) Reply frame received for 3\nI0312 22:10:23.909533 3304 log.go:172] (0xc0000f62c0) (0xc0005bc000) Create stream\nI0312 22:10:23.909542 3304 log.go:172] (0xc0000f62c0) (0xc0005bc000) Stream added, broadcasting: 5\nI0312 22:10:23.910169 3304 log.go:172] (0xc0000f62c0) Reply frame received for 5\nI0312 22:10:23.972652 3304 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0312 22:10:23.972672 3304 log.go:172] (0xc0008fa000) (3) Data frame handling\nI0312 22:10:23.972701 3304 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0312 22:10:23.972728 3304 log.go:172] (0xc0005bc000) (5) Data frame handling\nI0312 22:10:23.972743 3304 log.go:172] (0xc0005bc000) (5) Data frame sent\nI0312 22:10:23.972751 3304 log.go:172] 
(0xc0000f62c0) Data frame received for 5\nI0312 22:10:23.972759 3304 log.go:172] (0xc0005bc000) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.232.44 80\nConnection to 10.100.232.44 80 port [tcp/http] succeeded!\nI0312 22:10:23.973806 3304 log.go:172] (0xc0000f62c0) Data frame received for 1\nI0312 22:10:23.973824 3304 log.go:172] (0xc000727c20) (1) Data frame handling\nI0312 22:10:23.973833 3304 log.go:172] (0xc000727c20) (1) Data frame sent\nI0312 22:10:23.973849 3304 log.go:172] (0xc0000f62c0) (0xc000727c20) Stream removed, broadcasting: 1\nI0312 22:10:23.973864 3304 log.go:172] (0xc0000f62c0) Go away received\nI0312 22:10:23.974281 3304 log.go:172] (0xc0000f62c0) (0xc000727c20) Stream removed, broadcasting: 1\nI0312 22:10:23.974300 3304 log.go:172] (0xc0000f62c0) (0xc0008fa000) Stream removed, broadcasting: 3\nI0312 22:10:23.974309 3304 log.go:172] (0xc0000f62c0) (0xc0005bc000) Stream removed, broadcasting: 5\n" Mar 12 22:10:23.976: INFO: stdout: "" Mar 12 22:10:23.977: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:24.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1688" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.316 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":223,"skipped":3754,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:24.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 12 22:10:24.072: INFO: Waiting up to 5m0s for pod "pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4" in namespace "emptydir-5611" to be "success or failure" Mar 12 22:10:24.075: INFO: Pod "pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080362ms Mar 12 22:10:26.078: INFO: Pod "pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005983467s Mar 12 22:10:28.090: INFO: Pod "pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01708982s STEP: Saw pod success Mar 12 22:10:28.090: INFO: Pod "pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4" satisfied condition "success or failure" Mar 12 22:10:28.095: INFO: Trying to get logs from node jerma-worker pod pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4 container test-container: STEP: delete the pod Mar 12 22:10:28.128: INFO: Waiting for pod pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4 to disappear Mar 12 22:10:28.168: INFO: Pod pod-0104c4d7-b647-4bb8-bb4d-52c46969dfe4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:28.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5611" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:28.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 12 22:10:28.261: INFO: Waiting up to 5m0s for pod "pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6" in namespace "emptydir-751" to be "success or failure" Mar 12 22:10:28.294: INFO: Pod "pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.524676ms Mar 12 22:10:30.298: INFO: Pod "pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037220046s Mar 12 22:10:32.301: INFO: Pod "pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040037482s STEP: Saw pod success Mar 12 22:10:32.301: INFO: Pod "pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6" satisfied condition "success or failure" Mar 12 22:10:32.302: INFO: Trying to get logs from node jerma-worker pod pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6 container test-container: STEP: delete the pod Mar 12 22:10:32.335: INFO: Waiting for pod pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6 to disappear Mar 12 22:10:32.343: INFO: Pod pod-1df11f1f-954f-4b88-83cd-fc6af41aeeb6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:32.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-751" for this suite. 
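------------------------------
For reference, a minimal Go sketch of the tmpfs-backed emptyDir pod these permission tests create — not the suite's actual helper. Medium "Memory" backs the volume with tmpfs rather than node disk; the test container then creates a file with the mode under test (0666, 0777, ...) and verifies it. Namespace, image, and paths are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs", Namespace: "demo"}, // assumed
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumMemory mounts a tmpfs for the pod's lifetime.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29", // illustrative image
				// Create a file with the mode under test, then print it back.
				Command:      []string{"sh", "-c", "touch /test/f && chmod 0777 /test/f && stat -c %a /test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
	fmt.Println("emptyDir medium:", pod.Spec.Volumes[0].EmptyDir.Medium)
}
------------------------------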
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3795,"failed":0} ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:32.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:38.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2804" for this suite. • [SLOW TEST:6.094 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":226,"skipped":3795,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:38.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 22:10:39.032: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 22:10:41.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647839, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647839, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647839, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719647838, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 22:10:44.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:10:56.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3728" for this suite. STEP: Destroying namespace "webhook-3728-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.903 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":227,"skipped":3796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:10:56.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:11:01.203: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4758" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":228,"skipped":3819,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:11:01.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-85cef707-343e-4c34-a62d-582cda67d389 STEP: Creating a pod to test consume configMaps Mar 12 22:11:01.387: INFO: Waiting up to 5m0s for pod "pod-configmaps-92a3c6e3-8ff9-4e28-9ae3-f85f5386ea35" in namespace "configmap-8713" to be "success or failure" Mar 12 22:11:01.410: INFO: Pod "pod-configmaps-92a3c6e3-8ff9-4e28-9ae3-f85f5386ea35": Phase="Pending", Reason="", readiness=false. Elapsed: 23.461556ms Mar 12 22:11:03.414: INFO: Pod "pod-configmaps-92a3c6e3-8ff9-4e28-9ae3-f85f5386ea35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027504716s STEP: Saw pod success Mar 12 22:11:03.414: INFO: Pod "pod-configmaps-92a3c6e3-8ff9-4e28-9ae3-f85f5386ea35" satisfied condition "success or failure" Mar 12 22:11:03.417: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-92a3c6e3-8ff9-4e28-9ae3-f85f5386ea35 container configmap-volume-test: STEP: delete the pod Mar 12 22:11:03.435: INFO: Waiting for pod pod-configmaps-92a3c6e3-8ff9-4e28-9ae3-f85f5386ea35 to disappear Mar 12 22:11:03.482: INFO: Pod pod-configmaps-92a3c6e3-8ff9-4e28-9ae3-f85f5386ea35 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:11:03.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8713" for this suite. 
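------------------------------
For reference, a minimal Go sketch of how one ConfigMap is consumed through two volumes in the same pod, as this test does — not the suite's actual helper. Both volumes reference the same ConfigMap name; the container sees the same keys at two mount paths. Namespace, ConfigMap name, image, and paths are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmRef := corev1.LocalObjectReference{Name: "configmap-test-volume"} // assumed name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps", Namespace: "demo"}, // assumed
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "vol-1", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
				{Name: "vol-2", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/cm-1"},
					{Name: "vol-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	fmt.Println(len(pod.Spec.Volumes), "volumes from one ConfigMap")
}
------------------------------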
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3829,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:11:03.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-dd061a13-e2b4-4269-b014-a86cf9a2f1c7 in namespace container-probe-6235 Mar 12 22:11:05.552: INFO: Started pod liveness-dd061a13-e2b4-4269-b014-a86cf9a2f1c7 in namespace container-probe-6235 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 22:11:05.555: INFO: Initial restart count of pod liveness-dd061a13-e2b4-4269-b014-a86cf9a2f1c7 is 0 Mar 12 22:11:29.606: INFO: Restart count of pod container-probe-6235/liveness-dd061a13-e2b4-4269-b014-a86cf9a2f1c7 is now 1 (24.050548987s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:11:29.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6235" for this suite. 
• [SLOW TEST:26.155 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:11:29.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:12:00.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4403" for this suite. STEP: Destroying namespace "nsdeletetest-1146" for this suite. Mar 12 22:12:00.877: INFO: Namespace nsdeletetest-1146 was already deleted STEP: Destroying namespace "nsdeletetest-1751" for this suite. 
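------------------------------
For reference, a minimal client-go sketch of the delete-and-wait pattern this namespace test depends on — not the suite's actual helper. Namespace deletion is asynchronous: the namespace sits in Terminating while its pods are removed, so the caller polls until Get returns NotFound. This assumes client-go v0.18+ (API calls take a context); the kubeconfig path matches the log, the namespace name is assumed.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ns := "nsdeletetest" // assumed namespace name

	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Poll until the namespace (and with it, every pod inside) is gone.
	err = wait.PollImmediate(2*time.Second, 60*time.Second, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, nil
	})
	fmt.Println("namespace fully removed:", err == nil)
}
------------------------------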
• [SLOW TEST:31.233 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":231,"skipped":3881,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:12:00.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 12 22:12:00.952: INFO: Waiting up to 5m0s for pod "var-expansion-483572a8-1f1f-4da9-bd4c-c785ed99c756" in namespace "var-expansion-8544" to be "success or failure" Mar 12 22:12:00.955: INFO: Pod "var-expansion-483572a8-1f1f-4da9-bd4c-c785ed99c756": Phase="Pending", Reason="", readiness=false. Elapsed: 3.199353ms Mar 12 22:12:02.959: INFO: Pod "var-expansion-483572a8-1f1f-4da9-bd4c-c785ed99c756": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00666567s STEP: Saw pod success Mar 12 22:12:02.959: INFO: Pod "var-expansion-483572a8-1f1f-4da9-bd4c-c785ed99c756" satisfied condition "success or failure" Mar 12 22:12:02.961: INFO: Trying to get logs from node jerma-worker pod var-expansion-483572a8-1f1f-4da9-bd4c-c785ed99c756 container dapi-container: STEP: delete the pod Mar 12 22:12:03.026: INFO: Waiting for pod var-expansion-483572a8-1f1f-4da9-bd4c-c785ed99c756 to disappear Mar 12 22:12:03.033: INFO: Pod var-expansion-483572a8-1f1f-4da9-bd4c-c785ed99c756 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:12:03.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8544" for this suite. 
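------------------------------
For reference, a minimal Go sketch of command variable expansion as this test uses it — not the suite's actual helper. $(VAR) references in command and args are expanded by Kubernetes itself from the container's env before the process starts, not by the shell. Namespace, image, and the MESSAGE variable are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion", Namespace: "demo"}, // assumed
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.29", // illustrative image
				Env: []corev1.EnvVar{
					{Name: "MESSAGE", Value: "test-value"}, // assumed variable
				},
				// Kubernetes rewrites $(MESSAGE) to "test-value" before the
				// shell ever sees the command string.
				Command: []string{"sh", "-c", "echo $(MESSAGE)"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
------------------------------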
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3881,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:12:03.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:12:03.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6" in namespace "downward-api-979" to be "success or failure" Mar 12 22:12:03.088: INFO: Pod "downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.938808ms Mar 12 22:12:05.093: INFO: Pod "downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00812424s Mar 12 22:12:07.097: INFO: Pod "downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011760772s STEP: Saw pod success Mar 12 22:12:07.097: INFO: Pod "downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6" satisfied condition "success or failure" Mar 12 22:12:07.099: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6 container client-container: STEP: delete the pod Mar 12 22:12:07.134: INFO: Waiting for pod downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6 to disappear Mar 12 22:12:07.142: INFO: Pod downwardapi-volume-f2839c25-3d62-4f41-9633-8b231c57baf6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:12:07.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-979" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3891,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:12:07.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9527 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 12 22:12:07.232: INFO: Found 0 stateful pods, waiting for 3 Mar 12 22:12:17.236: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 22:12:17.236: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 22:12:17.236: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 12 22:12:17.257: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 12 22:12:27.294: INFO: Updating stateful set ss2 Mar 12 22:12:27.305: INFO: Waiting for Pod statefulset-9527/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 12 22:12:37.425: INFO: Found 2 stateful pods, waiting for 3 Mar 12 22:12:47.430: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 22:12:47.430: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 22:12:47.431: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 12 22:12:47.452: INFO: Updating stateful set ss2 Mar 12 22:12:47.457: INFO: Waiting for Pod statefulset-9527/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 12 22:12:57.499: INFO: Updating stateful set ss2 Mar 12 22:12:57.509: INFO: Waiting for StatefulSet statefulset-9527/ss2 to complete update Mar 12 22:12:57.509: INFO: Waiting for Pod statefulset-9527/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 12 22:13:07.514: INFO: Waiting for StatefulSet statefulset-9527/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 12 22:13:17.514: INFO: Deleting all statefulset in ns statefulset-9527 Mar 12 22:13:17.516: INFO: Scaling statefulset ss2 to 0 Mar 12 22:13:47.531: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 22:13:47.533: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:13:47.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9527" for this suite. • [SLOW TEST:100.403 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":234,"skipped":3904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:13:47.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:14:05.624: INFO: Container started at 2020-03-12 22:13:48 +0000 UTC, pod became ready at 2020-03-12 22:14:03 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:14:05.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6332" for this suite. 
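------------------------------
For reference, a minimal Go sketch of a readiness probe with an initial delay, the mechanism this test times — not the suite's actual helper. The pod must not report Ready before initialDelaySeconds elapses, and a readiness failure only removes the pod from endpoints; it never restarts the container (the log confirms a ~15s gap between start and ready). Namespace and image are assumptions; the Handler field follows v1.17-era k8s.io/api.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-pod", Namespace: "demo"}, // assumed
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// An exec probe that always succeeds once it runs.
						Exec: &corev1.ExecAction{Command: []string{"true"}},
					},
					InitialDelaySeconds: 15, // Ready cannot flip true before this
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Println("initial delay:", pod.Spec.Containers[0].ReadinessProbe.InitialDelaySeconds, "s")
}
------------------------------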
• [SLOW TEST:18.078 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3937,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:14:05.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:14:07.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5097" for this suite. 
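------------------------------
For reference, a minimal Go sketch of the read-only root filesystem setting this kubelet test exercises — not the suite's actual helper. With readOnlyRootFilesystem set, any write to the container's root filesystem fails with EROFS; only mounted volumes remain writable. Namespace and image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly", Namespace: "demo"}, // assumed
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29", // illustrative image
				// The write must fail; the fallback branch proves it did.
				Command:         []string{"sh", "-c", "echo hi > /file && echo unexpected-write || echo root-fs-is-read-only"},
				SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
			}},
		},
	}
	fmt.Println("readOnlyRootFilesystem:", *pod.Spec.Containers[0].SecurityContext.ReadOnlyRootFilesystem)
}
------------------------------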
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3945,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:14:07.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1003.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1003.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.190.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.190.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.190.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.190.118_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1003.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1003.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1003.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1003.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1003.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.190.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.190.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.190.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.190.118_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 22:14:11.985: INFO: Unable to read wheezy_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:11.987: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:11.989: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:11.992: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:12.012: INFO: Unable to read jessie_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:12.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:12.016: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:12.019: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:12.032: INFO: Lookups using dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692 failed for: [wheezy_udp@dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_udp@dns-test-service.dns-1003.svc.cluster.local jessie_tcp@dns-test-service.dns-1003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local] Mar 12 22:14:17.036: INFO: Unable to read wheezy_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.039: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods 
dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.042: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.046: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.067: INFO: Unable to read jessie_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.070: INFO: Unable to read jessie_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.072: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.075: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:17.090: INFO: Lookups using dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692 failed for: [wheezy_udp@dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_udp@dns-test-service.dns-1003.svc.cluster.local jessie_tcp@dns-test-service.dns-1003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local] Mar 12 22:14:22.035: INFO: Unable to read wheezy_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.037: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.040: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.051: INFO: Unable to read jessie_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the 
server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.053: INFO: Unable to read jessie_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.054: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.055: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:22.066: INFO: Lookups using dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692 failed for: [wheezy_udp@dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_udp@dns-test-service.dns-1003.svc.cluster.local jessie_tcp@dns-test-service.dns-1003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local] Mar 12 22:14:27.036: INFO: Unable to read wheezy_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.038: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.040: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.043: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.059: INFO: Unable to read jessie_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.061: INFO: Unable to read jessie_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.063: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.065: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod 
dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:27.077: INFO: Lookups using dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692 failed for: [wheezy_udp@dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_udp@dns-test-service.dns-1003.svc.cluster.local jessie_tcp@dns-test-service.dns-1003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local] Mar 12 22:14:32.038: INFO: Unable to read wheezy_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.056: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.159: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.184: INFO: Unable to read jessie_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.187: INFO: Unable to read jessie_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.189: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.191: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:32.204: INFO: Lookups using dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692 failed for: [wheezy_udp@dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_udp@dns-test-service.dns-1003.svc.cluster.local jessie_tcp@dns-test-service.dns-1003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local] Mar 12 
22:14:37.036: INFO: Unable to read wheezy_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.039: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.041: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.044: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.063: INFO: Unable to read jessie_udp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.065: INFO: Unable to read jessie_tcp@dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.089: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local from pod dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692: the server could not find the requested resource (get pods dns-test-485e06eb-22f4-429c-a3f1-c598999a5692) Mar 12 22:14:37.103: INFO: Lookups using dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692 failed for: [wheezy_udp@dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@dns-test-service.dns-1003.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_udp@dns-test-service.dns-1003.svc.cluster.local jessie_tcp@dns-test-service.dns-1003.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1003.svc.cluster.local] Mar 12 22:14:42.126: INFO: DNS probes using dns-1003/dns-test-485e06eb-22f4-429c-a3f1-c598999a5692 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:14:42.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1003" for this suite. 
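The wheezy/jessie probe scripts above reduce to a handful of dig queries executed inside the cluster. A by-hand sketch against the same names this run used; the probe pod and its image are assumptions, and any image that ships dig will do:

kubectl run dns-check --image=gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0 \
  --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-check
# A record for the headless service, over UDP and then TCP:
kubectl exec dns-check -- dig +notcp +noall +answer +search dns-test-service.dns-1003.svc.cluster.local A
kubectl exec dns-check -- dig +tcp +noall +answer +search dns-test-service.dns-1003.svc.cluster.local A
# SRV record for the named port:
kubectl exec dns-check -- dig +noall +answer +search _http._tcp.dns-test-service.dns-1003.svc.cluster.local SRV

A non-empty answer section is what each probe writes its OK marker on; the early "Unable to read" failures above are expected while the records propagate, and the spec only requires that the lookups eventually succeed, as they do at 22:14:42.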
• [SLOW TEST:34.597 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":237,"skipped":3950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:14:42.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 12 22:14:42.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9608' Mar 12 22:14:42.705: INFO: stderr: "" Mar 12 22:14:42.705: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 22:14:42.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9608' Mar 12 22:14:42.821: INFO: stderr: "" Mar 12 22:14:42.821: INFO: stdout: "update-demo-nautilus-7hk9t update-demo-nautilus-zxb2q " Mar 12 22:14:42.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7hk9t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:14:42.893: INFO: stderr: "" Mar 12 22:14:42.893: INFO: stdout: "" Mar 12 22:14:42.893: INFO: update-demo-nautilus-7hk9t is created but not running Mar 12 22:14:47.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9608' Mar 12 22:14:48.005: INFO: stderr: "" Mar 12 22:14:48.005: INFO: stdout: "update-demo-nautilus-7hk9t update-demo-nautilus-zxb2q " Mar 12 22:14:48.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7hk9t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:14:48.118: INFO: stderr: "" Mar 12 22:14:48.118: INFO: stdout: "true" Mar 12 22:14:48.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7hk9t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:14:48.199: INFO: stderr: "" Mar 12 22:14:48.199: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:14:48.199: INFO: validating pod update-demo-nautilus-7hk9t Mar 12 22:14:48.201: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:14:48.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:14:48.201: INFO: update-demo-nautilus-7hk9t is verified up and running Mar 12 22:14:48.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxb2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:14:48.266: INFO: stderr: "" Mar 12 22:14:48.266: INFO: stdout: "true" Mar 12 22:14:48.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxb2q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:14:48.325: INFO: stderr: "" Mar 12 22:14:48.325: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:14:48.325: INFO: validating pod update-demo-nautilus-zxb2q Mar 12 22:14:48.328: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:14:48.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:14:48.328: INFO: update-demo-nautilus-zxb2q is verified up and running STEP: rolling-update to new replication controller Mar 12 22:14:48.329: INFO: scanned /root for discovery docs: Mar 12 22:14:48.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9608' Mar 12 22:15:10.841: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 12 22:15:10.841: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 22:15:10.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9608' Mar 12 22:15:10.939: INFO: stderr: "" Mar 12 22:15:10.939: INFO: stdout: "update-demo-kitten-snw8n update-demo-kitten-tzgpq " Mar 12 22:15:10.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-snw8n -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:15:11.018: INFO: stderr: "" Mar 12 22:15:11.018: INFO: stdout: "true" Mar 12 22:15:11.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-snw8n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:15:11.091: INFO: stderr: "" Mar 12 22:15:11.091: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 12 22:15:11.091: INFO: validating pod update-demo-kitten-snw8n Mar 12 22:15:11.094: INFO: got data: { "image": "kitten.jpg" } Mar 12 22:15:11.094: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 12 22:15:11.094: INFO: update-demo-kitten-snw8n is verified up and running Mar 12 22:15:11.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzgpq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:15:11.159: INFO: stderr: "" Mar 12 22:15:11.159: INFO: stdout: "true" Mar 12 22:15:11.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzgpq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9608' Mar 12 22:15:11.232: INFO: stderr: "" Mar 12 22:15:11.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 12 22:15:11.232: INFO: validating pod update-demo-kitten-tzgpq Mar 12 22:15:11.235: INFO: got data: { "image": "kitten.jpg" } Mar 12 22:15:11.235: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 12 22:15:11.235: INFO: update-demo-kitten-tzgpq is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:11.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9608" for this suite. 
• [SLOW TEST:28.888 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":238,"skipped":4009,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:11.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3d96ee6a-7ef2-4db1-b54e-63f5d30cc2e6 STEP: Creating a pod to test consume configMaps Mar 12 22:15:11.310: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace" in namespace "projected-8785" to be "success or failure" Mar 12 22:15:11.315: INFO: Pod "pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825669ms Mar 12 22:15:13.318: INFO: Pod "pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008182165s Mar 12 22:15:15.321: INFO: Pod "pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011043559s STEP: Saw pod success Mar 12 22:15:15.321: INFO: Pod "pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace" satisfied condition "success or failure" Mar 12 22:15:15.323: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace container projected-configmap-volume-test: STEP: delete the pod Mar 12 22:15:15.362: INFO: Waiting for pod pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace to disappear Mar 12 22:15:15.369: INFO: Pod pod-projected-configmaps-9f9f840c-9221-4505-a0a6-4dc06bb51ace no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:15.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8785" for this suite. 
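A hand-rolled version of the pod this spec creates: a projected configMap volume read by a container running as a non-root UID. A minimal sketch; all object names and the UID here are hypothetical:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root, as the spec requires
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl logs projected-cm-nonroot   # expected output: value-1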
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:15.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 12 22:15:15.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-46' Mar 12 22:15:15.619: INFO: stderr: "" Mar 12 22:15:15.619: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 22:15:15.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-46' Mar 12 22:15:15.744: INFO: stderr: "" Mar 12 22:15:15.744: INFO: stdout: "update-demo-nautilus-765lc update-demo-nautilus-k74nf " Mar 12 22:15:15.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-765lc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:15.828: INFO: stderr: "" Mar 12 22:15:15.828: INFO: stdout: "" Mar 12 22:15:15.828: INFO: update-demo-nautilus-765lc is created but not running Mar 12 22:15:20.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-46' Mar 12 22:15:20.936: INFO: stderr: "" Mar 12 22:15:20.936: INFO: stdout: "update-demo-nautilus-765lc update-demo-nautilus-k74nf " Mar 12 22:15:20.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-765lc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:21.031: INFO: stderr: "" Mar 12 22:15:21.031: INFO: stdout: "true" Mar 12 22:15:21.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-765lc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:21.168: INFO: stderr: "" Mar 12 22:15:21.168: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:15:21.168: INFO: validating pod update-demo-nautilus-765lc Mar 12 22:15:21.172: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:15:21.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:15:21.172: INFO: update-demo-nautilus-765lc is verified up and running Mar 12 22:15:21.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:21.264: INFO: stderr: "" Mar 12 22:15:21.264: INFO: stdout: "true" Mar 12 22:15:21.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:21.332: INFO: stderr: "" Mar 12 22:15:21.332: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:15:21.332: INFO: validating pod update-demo-nautilus-k74nf Mar 12 22:15:21.334: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:15:21.334: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:15:21.334: INFO: update-demo-nautilus-k74nf is verified up and running STEP: scaling down the replication controller Mar 12 22:15:21.336: INFO: scanned /root for discovery docs: Mar 12 22:15:21.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-46' Mar 12 22:15:22.455: INFO: stderr: "" Mar 12 22:15:22.455: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 22:15:22.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-46' Mar 12 22:15:22.552: INFO: stderr: "" Mar 12 22:15:22.552: INFO: stdout: "update-demo-nautilus-765lc update-demo-nautilus-k74nf " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 12 22:15:27.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-46' Mar 12 22:15:27.656: INFO: stderr: "" Mar 12 22:15:27.656: INFO: stdout: "update-demo-nautilus-k74nf " Mar 12 22:15:27.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:27.737: INFO: stderr: "" Mar 12 22:15:27.737: INFO: stdout: "true" Mar 12 22:15:27.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:27.819: INFO: stderr: "" Mar 12 22:15:27.819: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:15:27.820: INFO: validating pod update-demo-nautilus-k74nf Mar 12 22:15:27.822: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:15:27.823: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:15:27.823: INFO: update-demo-nautilus-k74nf is verified up and running STEP: scaling up the replication controller Mar 12 22:15:27.825: INFO: scanned /root for discovery docs: Mar 12 22:15:27.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-46' Mar 12 22:15:28.935: INFO: stderr: "" Mar 12 22:15:28.935: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 12 22:15:28.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-46' Mar 12 22:15:29.016: INFO: stderr: "" Mar 12 22:15:29.016: INFO: stdout: "update-demo-nautilus-k74nf update-demo-nautilus-zgjt6 " Mar 12 22:15:29.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:29.086: INFO: stderr: "" Mar 12 22:15:29.086: INFO: stdout: "true" Mar 12 22:15:29.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:29.146: INFO: stderr: "" Mar 12 22:15:29.146: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:15:29.146: INFO: validating pod update-demo-nautilus-k74nf Mar 12 22:15:29.149: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:15:29.149: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:15:29.149: INFO: update-demo-nautilus-k74nf is verified up and running Mar 12 22:15:29.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgjt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:29.211: INFO: stderr: "" Mar 12 22:15:29.211: INFO: stdout: "" Mar 12 22:15:29.211: INFO: update-demo-nautilus-zgjt6 is created but not running Mar 12 22:15:34.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-46' Mar 12 22:15:34.320: INFO: stderr: "" Mar 12 22:15:34.320: INFO: stdout: "update-demo-nautilus-k74nf update-demo-nautilus-zgjt6 " Mar 12 22:15:34.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:34.404: INFO: stderr: "" Mar 12 22:15:34.405: INFO: stdout: "true" Mar 12 22:15:34.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74nf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:34.471: INFO: stderr: "" Mar 12 22:15:34.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:15:34.471: INFO: validating pod update-demo-nautilus-k74nf Mar 12 22:15:34.473: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:15:34.473: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:15:34.473: INFO: update-demo-nautilus-k74nf is verified up and running Mar 12 22:15:34.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgjt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:34.535: INFO: stderr: "" Mar 12 22:15:34.535: INFO: stdout: "true" Mar 12 22:15:34.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgjt6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-46' Mar 12 22:15:34.592: INFO: stderr: "" Mar 12 22:15:34.592: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 22:15:34.592: INFO: validating pod update-demo-nautilus-zgjt6 Mar 12 22:15:34.595: INFO: got data: { "image": "nautilus.jpg" } Mar 12 22:15:34.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 22:15:34.595: INFO: update-demo-nautilus-zgjt6 is verified up and running STEP: using delete to clean up resources Mar 12 22:15:34.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-46' Mar 12 22:15:34.665: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 12 22:15:34.665: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 12 22:15:34.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-46' Mar 12 22:15:34.730: INFO: stderr: "No resources found in kubectl-46 namespace.\n" Mar 12 22:15:34.730: INFO: stdout: "" Mar 12 22:15:34.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-46 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 22:15:34.790: INFO: stderr: "" Mar 12 22:15:34.790: INFO: stdout: "update-demo-nautilus-k74nf\nupdate-demo-nautilus-zgjt6\n" Mar 12 22:15:35.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-46' Mar 12 22:15:35.357: INFO: stderr: "No resources found in kubectl-46 namespace.\n" Mar 12 22:15:35.357: INFO: stdout: "" Mar 12 22:15:35.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-46 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 22:15:35.425: INFO: stderr: "" Mar 12 22:15:35.425: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:35.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-46" for this suite. • [SLOW TEST:20.055 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":240,"skipped":4039,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:35.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-535060f0-277b-4913-98d6-7550b73cc42b STEP: Creating a pod to test consume configMaps Mar 12 22:15:35.574: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-761b3a68-5e81-4c6d-ae91-13ff9af1ca2b" in namespace "projected-791" to be "success or failure" Mar 12 22:15:35.579: INFO: Pod 
"pod-projected-configmaps-761b3a68-5e81-4c6d-ae91-13ff9af1ca2b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.221109ms Mar 12 22:15:37.583: INFO: Pod "pod-projected-configmaps-761b3a68-5e81-4c6d-ae91-13ff9af1ca2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009519054s STEP: Saw pod success Mar 12 22:15:37.583: INFO: Pod "pod-projected-configmaps-761b3a68-5e81-4c6d-ae91-13ff9af1ca2b" satisfied condition "success or failure" Mar 12 22:15:37.588: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-761b3a68-5e81-4c6d-ae91-13ff9af1ca2b container projected-configmap-volume-test: STEP: delete the pod Mar 12 22:15:37.619: INFO: Waiting for pod pod-projected-configmaps-761b3a68-5e81-4c6d-ae91-13ff9af1ca2b to disappear Mar 12 22:15:37.625: INFO: Pod pod-projected-configmaps-761b3a68-5e81-4c6d-ae91-13ff9af1ca2b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:37.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-791" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4043,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:37.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:39.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2383" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":242,"skipped":4051,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:39.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Mar 12 22:15:39.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9262' Mar 12 22:15:40.058: INFO: stderr: "" Mar 12 22:15:40.058: INFO: stdout: "pod/pause created\n" Mar 12 22:15:40.058: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 12 22:15:40.058: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9262" to be "running and ready" Mar 12 22:15:40.062: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.881668ms Mar 12 22:15:42.066: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.007727049s Mar 12 22:15:42.066: INFO: Pod "pause" satisfied condition "running and ready" Mar 12 22:15:42.066: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 12 22:15:42.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9262' Mar 12 22:15:42.155: INFO: stderr: "" Mar 12 22:15:42.155: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 12 22:15:42.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9262' Mar 12 22:15:42.237: INFO: stderr: "" Mar 12 22:15:42.237: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 12 22:15:42.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9262' Mar 12 22:15:42.307: INFO: stderr: "" Mar 12 22:15:42.307: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 12 22:15:42.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9262' Mar 12 22:15:42.372: INFO: stderr: "" Mar 12 22:15:42.372: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Mar 12 22:15:42.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9262' Mar 12 22:15:42.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 22:15:42.472: INFO: stdout: "pod \"pause\" force deleted\n" Mar 12 22:15:42.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9262' Mar 12 22:15:42.547: INFO: stderr: "No resources found in kubectl-9262 namespace.\n" Mar 12 22:15:42.547: INFO: stdout: "" Mar 12 22:15:42.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9262 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 22:15:42.609: INFO: stderr: "" Mar 12 22:15:42.609: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:42.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9262" for this suite. 
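The label round-trip above condenses to three commands, shown here with the same pod and label names as this run:

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # -L shows it as an extra column
kubectl label pods pause testing-label-                      # the trailing '-' removes it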
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":243,"skipped":4062,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:42.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 22:15:45.245: INFO: Successfully updated pod "annotationupdate41c062cf-25f8-4bab-9afc-5ad54f6bf9f5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:49.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2392" for this suite. • [SLOW TEST:6.696 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4083,"failed":0} SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:49.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 22:15:49.388: INFO: Waiting up to 5m0s for pod "downward-api-68a11cab-b7d2-4c91-868e-2f12e0e0c572" in namespace "downward-api-4748" to be "success or failure" Mar 12 22:15:49.394: INFO: Pod "downward-api-68a11cab-b7d2-4c91-868e-2f12e0e0c572": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271201ms Mar 12 22:15:51.397: INFO: Pod "downward-api-68a11cab-b7d2-4c91-868e-2f12e0e0c572": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008942728s STEP: Saw pod success Mar 12 22:15:51.397: INFO: Pod "downward-api-68a11cab-b7d2-4c91-868e-2f12e0e0c572" satisfied condition "success or failure" Mar 12 22:15:51.399: INFO: Trying to get logs from node jerma-worker pod downward-api-68a11cab-b7d2-4c91-868e-2f12e0e0c572 container dapi-container: STEP: delete the pod Mar 12 22:15:51.485: INFO: Waiting for pod downward-api-68a11cab-b7d2-4c91-868e-2f12e0e0c572 to disappear Mar 12 22:15:51.502: INFO: Pod downward-api-68a11cab-b7d2-4c91-868e-2f12e0e0c572 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:51.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4748" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4085,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:51.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 22:15:52.125: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 22:15:55.161: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:55.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1088" for this suite. STEP: Destroying namespace "webhook-1088-markers" for this suite.
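The registration step above is the heart of this test: with failurePolicy: Fail, any request the API server cannot get an admission verdict for is rejected outright. A sketch of such a configuration, not the test's own manifest; all names are illustrative, and the service reference deliberately points at nothing so every matching request fails closed (the test presumably scopes its webhook via namespace labels, which would explain the extra webhook-1088-markers namespace above):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: fail-closed-example          # illustrative name
    webhooks:
    - name: fail-closed.example.com
      failurePolicy: Fail                # reject when the backend is unreachable
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
      clientConfig:
        service:
          name: no-such-service          # nothing listens here
          namespace: default
          path: /validate
      sideEffects: None
      admissionReviewVersions: ["v1"]
    EOF

    # With the webhook in place, a matching create is denied with a
    # "failed calling webhook" error rather than silently allowed.
    kubectl create configmap should-be-rejected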
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":246,"skipped":4099,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:55.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:15:55.418: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746" in namespace "projected-5693" to be "success or failure" Mar 12 22:15:55.422: INFO: Pod "downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345083ms Mar 12 22:15:57.425: INFO: Pod "downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00744233s Mar 12 22:15:59.427: INFO: Pod "downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009805907s STEP: Saw pod success Mar 12 22:15:59.427: INFO: Pod "downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746" satisfied condition "success or failure" Mar 12 22:15:59.429: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746 container client-container: STEP: delete the pod Mar 12 22:15:59.485: INFO: Waiting for pod downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746 to disappear Mar 12 22:15:59.494: INFO: Pod downwardapi-volume-d62fcf1e-1707-4ff8-a590-2712fc3f5746 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:15:59.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5693" for this suite. 
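For reference, the pod above mounts a projected downward API volume whose file contents come from the container's own resource limits. A rough equivalent, with illustrative names and a plain busybox reader in place of the e2e test image:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "500m"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: "1m"          # report the limit in millicores: 500
    EOF

The pod prints 500 and exits; kubectl logs downwardapi-volume-example retrieves the output, which is essentially the "Trying to get logs" step in the run above.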
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4108,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:15:59.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 12 22:16:01.573: INFO: &Pod{ObjectMeta:{send-events-97da8f0e-e662-4cb9-9659-b33bc39c7b63 events-3317 /api/v1/namespaces/events-3317/pods/send-events-97da8f0e-e662-4cb9-9659-b33bc39c7b63 52120ae8-d377-4654-84de-6d4a7be43ad9 1258040 0 2020-03-12 22:15:59 +0000 UTC map[name:foo time:533058629] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bpgdc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bpgdc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bpgdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Toleratio
nSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:15:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:16:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:16:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:15:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.33,StartTime:2020-03-12 22:15:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 22:16:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2d4e5c15f7207f153f5fc283efa4c89686cd288c5e053af4ed82d7993e0d4a75,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 12 22:16:03.577: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 12 22:16:05.581: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:05.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3317" for this suite. 
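Both checks above can be replayed by hand, since events are ordinary namespaced objects indexed by the object they describe. A sketch reusing the pod name from the dump above:

    # Typically yields a Scheduled event from the default-scheduler, plus
    # Pulled/Created/Started events from the kubelet on jerma-worker2.
    kubectl get events --namespace=events-3317 \
      --field-selector involvedObject.name=send-events-97da8f0e-e662-4cb9-9659-b33bc39c7b63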
• [SLOW TEST:6.094 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":248,"skipped":4118,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:05.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 12 22:16:09.681: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7921 PodName:pod-sharedvolume-087eb29e-786b-48ad-b50c-5e7dd38c1686 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:16:09.681: INFO: >>> kubeConfig: /root/.kube/config I0312 22:16:09.708251       6 log.go:172] (0xc0019478c0) (0xc001481f40) Create stream I0312 22:16:09.708280       6 log.go:172] (0xc0019478c0) (0xc001481f40) Stream added, broadcasting: 1 I0312 22:16:09.710426       6 log.go:172] (0xc0019478c0) Reply frame received for 1 I0312 22:16:09.710470       6 log.go:172] (0xc0019478c0) (0xc000ada460) Create stream I0312 22:16:09.710490       6 log.go:172] (0xc0019478c0) (0xc000ada460) Stream added, broadcasting: 3 I0312 22:16:09.711220       6 log.go:172] (0xc0019478c0) Reply frame received for 3 I0312 22:16:09.711244       6 log.go:172] (0xc0019478c0) (0xc0019fc0a0) Create stream I0312 22:16:09.711254       6 log.go:172] (0xc0019478c0) (0xc0019fc0a0) Stream added, broadcasting: 5 I0312 22:16:09.711865       6 log.go:172] (0xc0019478c0) Reply frame received for 5 I0312 22:16:09.763235       6 log.go:172] (0xc0019478c0) Data frame received for 3 I0312 22:16:09.763255       6 log.go:172] (0xc000ada460) (3) Data frame handling I0312 22:16:09.763267       6 log.go:172] (0xc000ada460) (3) Data frame sent I0312 22:16:09.763365       6 log.go:172] (0xc0019478c0) Data frame received for 3 I0312 22:16:09.763386       6 log.go:172] (0xc000ada460) (3) Data frame handling I0312 22:16:09.763408       6 log.go:172] (0xc0019478c0) Data frame received for 5 I0312 22:16:09.763425       6 log.go:172] (0xc0019fc0a0) (5) Data frame handling I0312 22:16:09.764211       6 log.go:172] (0xc0019478c0) Data frame received for 1 I0312 22:16:09.764245       6 log.go:172] (0xc001481f40) (1) Data frame handling I0312 22:16:09.764269       6 log.go:172] (0xc001481f40) (1) Data frame sent I0312 22:16:09.764293       6 log.go:172] (0xc0019478c0) (0xc001481f40) Stream removed, broadcasting: 1 I0312 22:16:09.764314       6 log.go:172] (0xc0019478c0) Go away received I0312 22:16:09.764393       6 log.go:172]
(0xc0019478c0) (0xc001481f40) Stream removed, broadcasting: 1 I0312 22:16:09.764406 6 log.go:172] (0xc0019478c0) (0xc000ada460) Stream removed, broadcasting: 3 I0312 22:16:09.764415 6 log.go:172] (0xc0019478c0) (0xc0019fc0a0) Stream removed, broadcasting: 5 Mar 12 22:16:09.764: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:09.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7921" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":249,"skipped":4122,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:09.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 12 22:16:13.913: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:13.937: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 22:16:15.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:15.940: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 22:16:17.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:17.941: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 22:16:19.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:19.941: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 22:16:21.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:21.944: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 22:16:23.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:23.942: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 22:16:25.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:25.942: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 22:16:27.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 22:16:27.941: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:27.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9604" for this suite. 
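The hook being exercised sits on the container spec. A minimal illustrative pod of the same shape (the pod name matches the log; the image and hook command are stand-ins, since the real test's hook calls back to the handler container created in BeforeEach):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-exec-hook
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "sleep 600"]
        lifecycle:
          postStart:
            exec:
              # Runs inside the container immediately after it is created;
              # the container is not considered Running until this returns.
              command: ["sh", "-c", "echo poststart > /tmp/hook-ran"]
    EOF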
• [SLOW TEST:18.157 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4135,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:27.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:16:28.037: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 12 22:16:33.053: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 22:16:33.054: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 22:16:35.165: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-378 /apis/apps/v1/namespaces/deployment-378/deployments/test-cleanup-deployment d551989d-2081-44d7-ab40-d6660e766db2 1258278 1 2020-03-12 22:16:33 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e16e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 22:16:33 +0000 UTC,LastTransitionTime:2020-03-12 22:16:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-03-12 22:16:34 +0000 UTC,LastTransitionTime:2020-03-12 22:16:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 12 22:16:35.167: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-378 /apis/apps/v1/namespaces/deployment-378/replicasets/test-cleanup-deployment-55ffc6b7b6 6bfaba88-25d6-4034-80c0-350c3e7ef714 1258267 1 2020-03-12 22:16:33 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d551989d-2081-44d7-ab40-d6660e766db2 0xc002e17207 0xc002e17208}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e17278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 22:16:35.170: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-crwkf" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-crwkf test-cleanup-deployment-55ffc6b7b6- deployment-378 /api/v1/namespaces/deployment-378/pods/test-cleanup-deployment-55ffc6b7b6-crwkf 08518bdf-0781-4454-be2e-49b68c3a7c00 1258266 0 2020-03-12 22:16:33 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 6bfaba88-25d6-4034-80c0-350c3e7ef714 0xc002e175f7 0xc002e175f8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sklh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sklh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sklh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:16:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:16:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:16:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:16:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.32,StartTime:2020-03-12 22:16:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 22:16:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://96a6cf863a39f1eb583af5e77c97234c585ea57af3602ffc1920f8fbb12775ec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:35.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-378" for this suite. • [SLOW TEST:7.227 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":251,"skipped":4146,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:35.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:16:35.244: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7850" for this suite. 
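The exec above goes through the same pods/exec subresource that kubectl drives; the test's twist is simply that it attaches over a websocket rather than SPDY. By hand, with an illustrative pod name:

    # Equivalent endpoint:
    #   /api/v1/namespaces/pods-7850/pods/<pod>/exec?command=cat&command=/etc/resolv.conf&stdout=true
    kubectl exec some-pod --namespace=pods-7850 -- cat /etc/resolv.conf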
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4151,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:37.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:37.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9009" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":253,"skipped":4154,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:37.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:37.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8680" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":254,"skipped":4165,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:37.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-5ptc STEP: Creating a pod to test atomic-volume-subpath Mar 12 22:16:37.682: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5ptc" in namespace "subpath-5319" to be "success or failure" Mar 12 22:16:37.687: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599408ms Mar 12 22:16:39.690: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 2.007765404s Mar 12 22:16:41.694: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 4.011443222s Mar 12 22:16:43.697: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 6.015179308s Mar 12 22:16:45.701: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 8.018815733s Mar 12 22:16:47.704: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 10.021513772s Mar 12 22:16:49.708: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 12.025651096s Mar 12 22:16:51.712: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 14.029546226s Mar 12 22:16:53.715: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 16.033357752s Mar 12 22:16:55.719: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 18.037073211s Mar 12 22:16:57.723: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Running", Reason="", readiness=true. Elapsed: 20.040984173s Mar 12 22:16:59.727: INFO: Pod "pod-subpath-test-downwardapi-5ptc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.044962325s STEP: Saw pod success Mar 12 22:16:59.727: INFO: Pod "pod-subpath-test-downwardapi-5ptc" satisfied condition "success or failure" Mar 12 22:16:59.730: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-5ptc container test-container-subpath-downwardapi-5ptc: STEP: delete the pod Mar 12 22:16:59.758: INFO: Waiting for pod pod-subpath-test-downwardapi-5ptc to disappear Mar 12 22:16:59.762: INFO: Pod pod-subpath-test-downwardapi-5ptc no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5ptc Mar 12 22:16:59.762: INFO: Deleting pod "pod-subpath-test-downwardapi-5ptc" in namespace "subpath-5319" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:16:59.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5319" for this suite. • [SLOW TEST:22.155 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":255,"skipped":4166,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:16:59.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:16:59.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb6b8acf-94cd-4deb-99e7-7e7109d9b1be" in namespace "projected-2287" to be "success or failure" Mar 12 22:16:59.851: INFO: Pod "downwardapi-volume-eb6b8acf-94cd-4deb-99e7-7e7109d9b1be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107953ms Mar 12 22:17:01.873: INFO: Pod "downwardapi-volume-eb6b8acf-94cd-4deb-99e7-7e7109d9b1be": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.029587859s STEP: Saw pod success Mar 12 22:17:01.873: INFO: Pod "downwardapi-volume-eb6b8acf-94cd-4deb-99e7-7e7109d9b1be" satisfied condition "success or failure" Mar 12 22:17:01.877: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-eb6b8acf-94cd-4deb-99e7-7e7109d9b1be container client-container: STEP: delete the pod Mar 12 22:17:01.907: INFO: Waiting for pod downwardapi-volume-eb6b8acf-94cd-4deb-99e7-7e7109d9b1be to disappear Mar 12 22:17:01.915: INFO: Pod downwardapi-volume-eb6b8acf-94cd-4deb-99e7-7e7109d9b1be no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:17:01.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2287" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4175,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:17:01.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 22:17:02.406: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 22:17:04.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719648222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719648222, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719648222, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719648222, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 22:17:07.444: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a 
configMap that does not comply with the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:17:07.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6941" for this suite. STEP: Destroying namespace "webhook-6941-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.691 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":257,"skipped":4185,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:17:07.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 12 22:17:07.654: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258554 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 22:17:07.654: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258554 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 12 22:17:17.661: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551
/api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258605 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 12 22:17:17.661: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258605 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 12 22:17:27.669: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258637 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 22:17:27.669: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258637 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 12 22:17:37.675: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258667 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 12 22:17:37.676: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-a 97dc80cf-a10f-4408-bf0d-b199c801a605 1258667 0 2020-03-12 22:17:07 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 12 22:17:47.683: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-b 7bb34bc4-32b8-483b-a771-177a77db7e21 1258697 0 2020-03-12 22:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 22:17:47.683: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-b 7bb34bc4-32b8-483b-a771-177a77db7e21 1258697 0 2020-03-12 22:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 12 22:17:57.689: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-b 7bb34bc4-32b8-483b-a771-177a77db7e21 1258728 0 2020-03-12 22:17:47 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 12 22:17:57.690: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3551 /api/v1/namespaces/watch-3551/configmaps/e2e-watch-test-configmap-b 7bb34bc4-32b8-483b-a771-177a77db7e21 1258728 0 2020-03-12 22:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:07.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3551" for this suite. • [SLOW TEST:60.084 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":258,"skipped":4190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:07.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:07.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8110" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":259,"skipped":4220,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:07.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 12 22:18:10.970: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:12.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6041" for this suite. •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":260,"skipped":4222,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:12.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:18:12.056: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 12 22:18:12.078: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 12 22:18:17.148: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 22:18:17.148: INFO: Creating deployment "test-rolling-update-deployment" Mar 12 22:18:17.162: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 12 22:18:17.180: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 12 22:18:19.185: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 12 22:18:19.187: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 12 22:18:19.193: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2880 /apis/apps/v1/namespaces/deployment-2880/deployments/test-rolling-update-deployment 17a02990-8471-4af8-bfd6-61b5185134a7 1258899 1 2020-03-12 22:18:17 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048ef018 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 22:18:17 +0000 UTC,LastTransitionTime:2020-03-12 22:18:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-12 22:18:18 +0000 UTC,LastTransitionTime:2020-03-12 22:18:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 12 22:18:19.220: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2880 /apis/apps/v1/namespaces/deployment-2880/replicasets/test-rolling-update-deployment-67cf4f6444 c7b5099c-d8dc-4eef-bc77-fa82ed2f2f2b 1258888 1 2020-03-12 22:18:17 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 17a02990-8471-4af8-bfd6-61b5185134a7 0xc0049d3097 0xc0049d3098}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049d3118 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 22:18:19.220: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 12 22:18:19.220: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2880 /apis/apps/v1/namespaces/deployment-2880/replicasets/test-rolling-update-controller 5a2216c8-3490-4260-ad31-61cec9deced3 1258897 2 2020-03-12 22:18:12 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 17a02990-8471-4af8-bfd6-61b5185134a7 0xc0049d2fc7 0xc0049d2fc8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0049d3028 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 22:18:19.223: INFO: Pod "test-rolling-update-deployment-67cf4f6444-ksxpr" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-ksxpr test-rolling-update-deployment-67cf4f6444- deployment-2880 /api/v1/namespaces/deployment-2880/pods/test-rolling-update-deployment-67cf4f6444-ksxpr 41e80761-ad80-443a-9009-6b35cd4cb341 1258887 0 2020-03-12 22:18:17 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 c7b5099c-d8dc-4eef-bc77-fa82ed2f2f2b 0xc0049d3597 0xc0049d3598}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t8sc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t8sc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t8sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:18:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:18:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:18:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 22:18:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.37,StartTime:2020-03-12 22:18:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 22:18:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://54b3528b155bb3e4c79e3702fa057c684666b8c8a947e403837dec71b019ce8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:19.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2880" for this suite. • [SLOW TEST:7.201 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":261,"skipped":4229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:19.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 12 22:18:19.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6e9f6a4-1347-403c-98b7-f7e2fe032b0c" in namespace "downward-api-9236" to be "success or failure" Mar 12 22:18:19.317: INFO: Pod "downwardapi-volume-f6e9f6a4-1347-403c-98b7-f7e2fe032b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.808848ms Mar 12 22:18:21.320: INFO: Pod "downwardapi-volume-f6e9f6a4-1347-403c-98b7-f7e2fe032b0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.034263826s STEP: Saw pod success Mar 12 22:18:21.320: INFO: Pod "downwardapi-volume-f6e9f6a4-1347-403c-98b7-f7e2fe032b0c" satisfied condition "success or failure" Mar 12 22:18:21.323: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f6e9f6a4-1347-403c-98b7-f7e2fe032b0c container client-container: STEP: delete the pod Mar 12 22:18:21.340: INFO: Waiting for pod downwardapi-volume-f6e9f6a4-1347-403c-98b7-f7e2fe032b0c to disappear Mar 12 22:18:21.356: INFO: Pod downwardapi-volume-f6e9f6a4-1347-403c-98b7-f7e2fe032b0c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:21.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9236" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4255,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:21.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-fd9979aa-47d5-4aa8-a19c-ea3b35beceeb STEP: Creating a pod to test consume configMaps Mar 12 22:18:21.460: INFO: Waiting up to 5m0s for pod "pod-configmaps-c477e609-3935-482d-9c42-a64cdfbfcc2c" in namespace "configmap-9549" to be "success or failure" Mar 12 22:18:21.465: INFO: Pod "pod-configmaps-c477e609-3935-482d-9c42-a64cdfbfcc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.654107ms Mar 12 22:18:23.469: INFO: Pod "pod-configmaps-c477e609-3935-482d-9c42-a64cdfbfcc2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009212295s STEP: Saw pod success Mar 12 22:18:23.469: INFO: Pod "pod-configmaps-c477e609-3935-482d-9c42-a64cdfbfcc2c" satisfied condition "success or failure" Mar 12 22:18:23.472: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c477e609-3935-482d-9c42-a64cdfbfcc2c container configmap-volume-test: STEP: delete the pod Mar 12 22:18:23.501: INFO: Waiting for pod pod-configmaps-c477e609-3935-482d-9c42-a64cdfbfcc2c to disappear Mar 12 22:18:23.509: INFO: Pod pod-configmaps-c477e609-3935-482d-9c42-a64cdfbfcc2c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:23.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9549" for this suite. 
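For reference, the defaultMode behavior exercised above can be reproduced with a pod spec along these lines. This is a minimal sketch using the k8s.io/api/core/v1 types, not the e2e fixture itself: the volume name, mount path, mode value, and the agnhost mounttest flag are illustrative assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod builds a pod that mounts a ConfigMap as a volume with an
// explicit DefaultMode, which controls the file permissions of the projected keys.
func configMapVolumePod(namespace, cmName string) *corev1.Pod {
	mode := int32(0400) // assumed mode under test; files should appear as -r--------
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				// agnhost's mounttest subcommand can print the mounted file's
				// permissions so a caller can assert they match DefaultMode
				// (the flag name shown is an assumption).
				Args:         []string{"mounttest", "--file_perm=/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}

func main() { _ = configMapVolumePod("configmap-9549", "configmap-test-volume") }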
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4264,"failed":0} ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:23.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-16c083ef-e8c5-420d-a9d3-9efdcec040bc [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:23.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9425" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":264,"skipped":4264,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:23.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 12 22:18:31.725: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:31.725: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:31.746361 6 log.go:172] (0xc001d20370) (0xc0027d3b80) Create stream I0312 22:18:31.746382 6 log.go:172] (0xc001d20370) (0xc0027d3b80) Stream added, broadcasting: 1 I0312 22:18:31.747803 6 log.go:172] (0xc001d20370) Reply frame received for 1 I0312 22:18:31.747832 6 log.go:172] (0xc001d20370) (0xc0027d3cc0) Create stream I0312 22:18:31.747842 6 log.go:172] (0xc001d20370) (0xc0027d3cc0) Stream added, broadcasting: 3 I0312 22:18:31.748439 6 log.go:172] (0xc001d20370) Reply frame received for 3 I0312 22:18:31.748468 6 log.go:172] (0xc001d20370) (0xc001480000) Create stream I0312 22:18:31.748477 6 log.go:172] (0xc001d20370) (0xc001480000) Stream added, broadcasting: 5 I0312 22:18:31.749012 6 
log.go:172] (0xc001d20370) Reply frame received for 5 I0312 22:18:31.815691 6 log.go:172] (0xc001d20370) Data frame received for 5 I0312 22:18:31.815715 6 log.go:172] (0xc001480000) (5) Data frame handling I0312 22:18:31.815735 6 log.go:172] (0xc001d20370) Data frame received for 3 I0312 22:18:31.815761 6 log.go:172] (0xc0027d3cc0) (3) Data frame handling I0312 22:18:31.815771 6 log.go:172] (0xc0027d3cc0) (3) Data frame sent I0312 22:18:31.815780 6 log.go:172] (0xc001d20370) Data frame received for 3 I0312 22:18:31.815786 6 log.go:172] (0xc0027d3cc0) (3) Data frame handling I0312 22:18:31.816634 6 log.go:172] (0xc001d20370) Data frame received for 1 I0312 22:18:31.816651 6 log.go:172] (0xc0027d3b80) (1) Data frame handling I0312 22:18:31.816661 6 log.go:172] (0xc0027d3b80) (1) Data frame sent I0312 22:18:31.816671 6 log.go:172] (0xc001d20370) (0xc0027d3b80) Stream removed, broadcasting: 1 I0312 22:18:31.816685 6 log.go:172] (0xc001d20370) Go away received I0312 22:18:31.816791 6 log.go:172] (0xc001d20370) (0xc0027d3b80) Stream removed, broadcasting: 1 I0312 22:18:31.816814 6 log.go:172] (0xc001d20370) (0xc0027d3cc0) Stream removed, broadcasting: 3 I0312 22:18:31.816823 6 log.go:172] (0xc001d20370) (0xc001480000) Stream removed, broadcasting: 5 Mar 12 22:18:31.816: INFO: Exec stderr: "" Mar 12 22:18:31.816: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:31.816: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:31.846724 6 log.go:172] (0xc00223ce70) (0xc000adb7c0) Create stream I0312 22:18:31.846748 6 log.go:172] (0xc00223ce70) (0xc000adb7c0) Stream added, broadcasting: 1 I0312 22:18:31.848628 6 log.go:172] (0xc00223ce70) Reply frame received for 1 I0312 22:18:31.848673 6 log.go:172] (0xc00223ce70) (0xc0014800a0) Create stream I0312 22:18:31.848684 6 log.go:172] (0xc00223ce70) (0xc0014800a0) Stream added, broadcasting: 3 I0312 22:18:31.849432 6 log.go:172] (0xc00223ce70) Reply frame received for 3 I0312 22:18:31.849478 6 log.go:172] (0xc00223ce70) (0xc001db7360) Create stream I0312 22:18:31.849495 6 log.go:172] (0xc00223ce70) (0xc001db7360) Stream added, broadcasting: 5 I0312 22:18:31.850390 6 log.go:172] (0xc00223ce70) Reply frame received for 5 I0312 22:18:31.915819 6 log.go:172] (0xc00223ce70) Data frame received for 5 I0312 22:18:31.915854 6 log.go:172] (0xc001db7360) (5) Data frame handling I0312 22:18:31.915875 6 log.go:172] (0xc00223ce70) Data frame received for 3 I0312 22:18:31.915887 6 log.go:172] (0xc0014800a0) (3) Data frame handling I0312 22:18:31.915903 6 log.go:172] (0xc0014800a0) (3) Data frame sent I0312 22:18:31.915919 6 log.go:172] (0xc00223ce70) Data frame received for 3 I0312 22:18:31.915934 6 log.go:172] (0xc0014800a0) (3) Data frame handling I0312 22:18:31.916986 6 log.go:172] (0xc00223ce70) Data frame received for 1 I0312 22:18:31.917004 6 log.go:172] (0xc000adb7c0) (1) Data frame handling I0312 22:18:31.917016 6 log.go:172] (0xc000adb7c0) (1) Data frame sent I0312 22:18:31.917030 6 log.go:172] (0xc00223ce70) (0xc000adb7c0) Stream removed, broadcasting: 1 I0312 22:18:31.917040 6 log.go:172] (0xc00223ce70) Go away received I0312 22:18:31.917170 6 log.go:172] (0xc00223ce70) (0xc000adb7c0) Stream removed, broadcasting: 1 I0312 22:18:31.917192 6 log.go:172] (0xc00223ce70) (0xc0014800a0) Stream removed, broadcasting: 3 I0312 22:18:31.917207 6 log.go:172] (0xc00223ce70) (0xc001db7360) Stream removed, 
broadcasting: 5 Mar 12 22:18:31.917: INFO: Exec stderr: "" Mar 12 22:18:31.917: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:31.917: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:31.937353 6 log.go:172] (0xc00223d4a0) (0xc000adba40) Create stream I0312 22:18:31.937378 6 log.go:172] (0xc00223d4a0) (0xc000adba40) Stream added, broadcasting: 1 I0312 22:18:31.939291 6 log.go:172] (0xc00223d4a0) Reply frame received for 1 I0312 22:18:31.939317 6 log.go:172] (0xc00223d4a0) (0xc001db7400) Create stream I0312 22:18:31.939326 6 log.go:172] (0xc00223d4a0) (0xc001db7400) Stream added, broadcasting: 3 I0312 22:18:31.940043 6 log.go:172] (0xc00223d4a0) Reply frame received for 3 I0312 22:18:31.940075 6 log.go:172] (0xc00223d4a0) (0xc001480320) Create stream I0312 22:18:31.940091 6 log.go:172] (0xc00223d4a0) (0xc001480320) Stream added, broadcasting: 5 I0312 22:18:31.940949 6 log.go:172] (0xc00223d4a0) Reply frame received for 5 I0312 22:18:31.991928 6 log.go:172] (0xc00223d4a0) Data frame received for 3 I0312 22:18:31.991958 6 log.go:172] (0xc001db7400) (3) Data frame handling I0312 22:18:31.991965 6 log.go:172] (0xc001db7400) (3) Data frame sent I0312 22:18:31.991969 6 log.go:172] (0xc00223d4a0) Data frame received for 3 I0312 22:18:31.991976 6 log.go:172] (0xc001db7400) (3) Data frame handling I0312 22:18:31.992025 6 log.go:172] (0xc00223d4a0) Data frame received for 5 I0312 22:18:31.992034 6 log.go:172] (0xc001480320) (5) Data frame handling I0312 22:18:31.992828 6 log.go:172] (0xc00223d4a0) Data frame received for 1 I0312 22:18:31.992840 6 log.go:172] (0xc000adba40) (1) Data frame handling I0312 22:18:31.992847 6 log.go:172] (0xc000adba40) (1) Data frame sent I0312 22:18:31.992861 6 log.go:172] (0xc00223d4a0) (0xc000adba40) Stream removed, broadcasting: 1 I0312 22:18:31.992879 6 log.go:172] (0xc00223d4a0) Go away received I0312 22:18:31.992986 6 log.go:172] (0xc00223d4a0) (0xc000adba40) Stream removed, broadcasting: 1 I0312 22:18:31.992999 6 log.go:172] (0xc00223d4a0) (0xc001db7400) Stream removed, broadcasting: 3 I0312 22:18:31.993006 6 log.go:172] (0xc00223d4a0) (0xc001480320) Stream removed, broadcasting: 5 Mar 12 22:18:31.993: INFO: Exec stderr: "" Mar 12 22:18:31.993: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:31.993: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:32.011223 6 log.go:172] (0xc001af8370) (0xc001480b40) Create stream I0312 22:18:32.011238 6 log.go:172] (0xc001af8370) (0xc001480b40) Stream added, broadcasting: 1 I0312 22:18:32.012933 6 log.go:172] (0xc001af8370) Reply frame received for 1 I0312 22:18:32.012951 6 log.go:172] (0xc001af8370) (0xc0027d3d60) Create stream I0312 22:18:32.012958 6 log.go:172] (0xc001af8370) (0xc0027d3d60) Stream added, broadcasting: 3 I0312 22:18:32.013453 6 log.go:172] (0xc001af8370) Reply frame received for 3 I0312 22:18:32.013482 6 log.go:172] (0xc001af8370) (0xc0019fda40) Create stream I0312 22:18:32.013496 6 log.go:172] (0xc001af8370) (0xc0019fda40) Stream added, broadcasting: 5 I0312 22:18:32.014107 6 log.go:172] (0xc001af8370) Reply frame received for 5 I0312 22:18:32.075962 6 log.go:172] (0xc001af8370) Data frame received for 5 I0312 22:18:32.075987 6 log.go:172] (0xc0019fda40) (5) Data frame handling 
I0312 22:18:32.076004 6 log.go:172] (0xc001af8370) Data frame received for 3 I0312 22:18:32.076012 6 log.go:172] (0xc0027d3d60) (3) Data frame handling I0312 22:18:32.076023 6 log.go:172] (0xc0027d3d60) (3) Data frame sent I0312 22:18:32.076029 6 log.go:172] (0xc001af8370) Data frame received for 3 I0312 22:18:32.076035 6 log.go:172] (0xc0027d3d60) (3) Data frame handling I0312 22:18:32.077045 6 log.go:172] (0xc001af8370) Data frame received for 1 I0312 22:18:32.077059 6 log.go:172] (0xc001480b40) (1) Data frame handling I0312 22:18:32.077067 6 log.go:172] (0xc001480b40) (1) Data frame sent I0312 22:18:32.077077 6 log.go:172] (0xc001af8370) (0xc001480b40) Stream removed, broadcasting: 1 I0312 22:18:32.077107 6 log.go:172] (0xc001af8370) Go away received I0312 22:18:32.077132 6 log.go:172] (0xc001af8370) (0xc001480b40) Stream removed, broadcasting: 1 I0312 22:18:32.077162 6 log.go:172] (0xc001af8370) (0xc0027d3d60) Stream removed, broadcasting: 3 I0312 22:18:32.077169 6 log.go:172] (0xc001af8370) (0xc0019fda40) Stream removed, broadcasting: 5 Mar 12 22:18:32.077: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 12 22:18:32.077: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:32.077: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:32.113574 6 log.go:172] (0xc001d209a0) (0xc0022fc0a0) Create stream I0312 22:18:32.113618 6 log.go:172] (0xc001d209a0) (0xc0022fc0a0) Stream added, broadcasting: 1 I0312 22:18:32.121285 6 log.go:172] (0xc001d209a0) Reply frame received for 1 I0312 22:18:32.121340 6 log.go:172] (0xc001d209a0) (0xc000adbae0) Create stream I0312 22:18:32.121354 6 log.go:172] (0xc001d209a0) (0xc000adbae0) Stream added, broadcasting: 3 I0312 22:18:32.122854 6 log.go:172] (0xc001d209a0) Reply frame received for 3 I0312 22:18:32.122884 6 log.go:172] (0xc001d209a0) (0xc000adbc20) Create stream I0312 22:18:32.122894 6 log.go:172] (0xc001d209a0) (0xc000adbc20) Stream added, broadcasting: 5 I0312 22:18:32.124499 6 log.go:172] (0xc001d209a0) Reply frame received for 5 I0312 22:18:32.179283 6 log.go:172] (0xc001d209a0) Data frame received for 5 I0312 22:18:32.179305 6 log.go:172] (0xc000adbc20) (5) Data frame handling I0312 22:18:32.179318 6 log.go:172] (0xc001d209a0) Data frame received for 3 I0312 22:18:32.179324 6 log.go:172] (0xc000adbae0) (3) Data frame handling I0312 22:18:32.179333 6 log.go:172] (0xc000adbae0) (3) Data frame sent I0312 22:18:32.179340 6 log.go:172] (0xc001d209a0) Data frame received for 3 I0312 22:18:32.179347 6 log.go:172] (0xc000adbae0) (3) Data frame handling I0312 22:18:32.180344 6 log.go:172] (0xc001d209a0) Data frame received for 1 I0312 22:18:32.180357 6 log.go:172] (0xc0022fc0a0) (1) Data frame handling I0312 22:18:32.180375 6 log.go:172] (0xc0022fc0a0) (1) Data frame sent I0312 22:18:32.180390 6 log.go:172] (0xc001d209a0) (0xc0022fc0a0) Stream removed, broadcasting: 1 I0312 22:18:32.180403 6 log.go:172] (0xc001d209a0) Go away received I0312 22:18:32.180501 6 log.go:172] (0xc001d209a0) (0xc0022fc0a0) Stream removed, broadcasting: 1 I0312 22:18:32.180516 6 log.go:172] (0xc001d209a0) (0xc000adbae0) Stream removed, broadcasting: 3 I0312 22:18:32.180528 6 log.go:172] (0xc001d209a0) (0xc000adbc20) Stream removed, broadcasting: 5 Mar 12 22:18:32.180: INFO: Exec stderr: "" Mar 12 22:18:32.180: INFO: ExecWithOptions 
{Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:32.180: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:32.200620 6 log.go:172] (0xc001eea4d0) (0xc001db7720) Create stream I0312 22:18:32.200644 6 log.go:172] (0xc001eea4d0) (0xc001db7720) Stream added, broadcasting: 1 I0312 22:18:32.203570 6 log.go:172] (0xc001eea4d0) Reply frame received for 1 I0312 22:18:32.203610 6 log.go:172] (0xc001eea4d0) (0xc0022fc140) Create stream I0312 22:18:32.203627 6 log.go:172] (0xc001eea4d0) (0xc0022fc140) Stream added, broadcasting: 3 I0312 22:18:32.205543 6 log.go:172] (0xc001eea4d0) Reply frame received for 3 I0312 22:18:32.205573 6 log.go:172] (0xc001eea4d0) (0xc0022fc1e0) Create stream I0312 22:18:32.205590 6 log.go:172] (0xc001eea4d0) (0xc0022fc1e0) Stream added, broadcasting: 5 I0312 22:18:32.207211 6 log.go:172] (0xc001eea4d0) Reply frame received for 5 I0312 22:18:32.258667 6 log.go:172] (0xc001eea4d0) Data frame received for 5 I0312 22:18:32.258687 6 log.go:172] (0xc0022fc1e0) (5) Data frame handling I0312 22:18:32.258701 6 log.go:172] (0xc001eea4d0) Data frame received for 3 I0312 22:18:32.258714 6 log.go:172] (0xc0022fc140) (3) Data frame handling I0312 22:18:32.258725 6 log.go:172] (0xc0022fc140) (3) Data frame sent I0312 22:18:32.258732 6 log.go:172] (0xc001eea4d0) Data frame received for 3 I0312 22:18:32.258738 6 log.go:172] (0xc0022fc140) (3) Data frame handling I0312 22:18:32.259652 6 log.go:172] (0xc001eea4d0) Data frame received for 1 I0312 22:18:32.259665 6 log.go:172] (0xc001db7720) (1) Data frame handling I0312 22:18:32.259680 6 log.go:172] (0xc001db7720) (1) Data frame sent I0312 22:18:32.259729 6 log.go:172] (0xc001eea4d0) (0xc001db7720) Stream removed, broadcasting: 1 I0312 22:18:32.259741 6 log.go:172] (0xc001eea4d0) Go away received I0312 22:18:32.259798 6 log.go:172] (0xc001eea4d0) (0xc001db7720) Stream removed, broadcasting: 1 I0312 22:18:32.259811 6 log.go:172] (0xc001eea4d0) (0xc0022fc140) Stream removed, broadcasting: 3 I0312 22:18:32.259835 6 log.go:172] (0xc001eea4d0) (0xc0022fc1e0) Stream removed, broadcasting: 5 Mar 12 22:18:32.259: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 12 22:18:32.259: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:32.259: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:32.279436 6 log.go:172] (0xc001d20fd0) (0xc0022fc500) Create stream I0312 22:18:32.279459 6 log.go:172] (0xc001d20fd0) (0xc0022fc500) Stream added, broadcasting: 1 I0312 22:18:32.281261 6 log.go:172] (0xc001d20fd0) Reply frame received for 1 I0312 22:18:32.281291 6 log.go:172] (0xc001d20fd0) (0xc001480c80) Create stream I0312 22:18:32.281301 6 log.go:172] (0xc001d20fd0) (0xc001480c80) Stream added, broadcasting: 3 I0312 22:18:32.281791 6 log.go:172] (0xc001d20fd0) Reply frame received for 3 I0312 22:18:32.281806 6 log.go:172] (0xc001d20fd0) (0xc000adbcc0) Create stream I0312 22:18:32.281813 6 log.go:172] (0xc001d20fd0) (0xc000adbcc0) Stream added, broadcasting: 5 I0312 22:18:32.282339 6 log.go:172] (0xc001d20fd0) Reply frame received for 5 I0312 22:18:32.313294 6 log.go:172] (0xc001d20fd0) Data frame received for 5 I0312 22:18:32.313307 6 log.go:172] (0xc000adbcc0) (5) Data 
frame handling I0312 22:18:32.313323 6 log.go:172] (0xc001d20fd0) Data frame received for 3 I0312 22:18:32.313341 6 log.go:172] (0xc001480c80) (3) Data frame handling I0312 22:18:32.313352 6 log.go:172] (0xc001480c80) (3) Data frame sent I0312 22:18:32.313366 6 log.go:172] (0xc001d20fd0) Data frame received for 3 I0312 22:18:32.313377 6 log.go:172] (0xc001480c80) (3) Data frame handling I0312 22:18:32.314268 6 log.go:172] (0xc001d20fd0) Data frame received for 1 I0312 22:18:32.314281 6 log.go:172] (0xc0022fc500) (1) Data frame handling I0312 22:18:32.314287 6 log.go:172] (0xc0022fc500) (1) Data frame sent I0312 22:18:32.314293 6 log.go:172] (0xc001d20fd0) (0xc0022fc500) Stream removed, broadcasting: 1 I0312 22:18:32.314305 6 log.go:172] (0xc001d20fd0) Go away received I0312 22:18:32.314401 6 log.go:172] (0xc001d20fd0) (0xc0022fc500) Stream removed, broadcasting: 1 I0312 22:18:32.314414 6 log.go:172] (0xc001d20fd0) (0xc001480c80) Stream removed, broadcasting: 3 I0312 22:18:32.314427 6 log.go:172] (0xc001d20fd0) (0xc000adbcc0) Stream removed, broadcasting: 5 Mar 12 22:18:32.314: INFO: Exec stderr: "" Mar 12 22:18:32.314: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:32.314: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:32.331671 6 log.go:172] (0xc001a302c0) (0xc0021ee280) Create stream I0312 22:18:32.331685 6 log.go:172] (0xc001a302c0) (0xc0021ee280) Stream added, broadcasting: 1 I0312 22:18:32.333151 6 log.go:172] (0xc001a302c0) Reply frame received for 1 I0312 22:18:32.333169 6 log.go:172] (0xc001a302c0) (0xc001db7ae0) Create stream I0312 22:18:32.333176 6 log.go:172] (0xc001a302c0) (0xc001db7ae0) Stream added, broadcasting: 3 I0312 22:18:32.333616 6 log.go:172] (0xc001a302c0) Reply frame received for 3 I0312 22:18:32.333633 6 log.go:172] (0xc001a302c0) (0xc000adbe00) Create stream I0312 22:18:32.333640 6 log.go:172] (0xc001a302c0) (0xc000adbe00) Stream added, broadcasting: 5 I0312 22:18:32.334066 6 log.go:172] (0xc001a302c0) Reply frame received for 5 I0312 22:18:32.399990 6 log.go:172] (0xc001a302c0) Data frame received for 5 I0312 22:18:32.400019 6 log.go:172] (0xc000adbe00) (5) Data frame handling I0312 22:18:32.400033 6 log.go:172] (0xc001a302c0) Data frame received for 3 I0312 22:18:32.400040 6 log.go:172] (0xc001db7ae0) (3) Data frame handling I0312 22:18:32.400048 6 log.go:172] (0xc001db7ae0) (3) Data frame sent I0312 22:18:32.400054 6 log.go:172] (0xc001a302c0) Data frame received for 3 I0312 22:18:32.400060 6 log.go:172] (0xc001db7ae0) (3) Data frame handling I0312 22:18:32.401339 6 log.go:172] (0xc001a302c0) Data frame received for 1 I0312 22:18:32.401377 6 log.go:172] (0xc0021ee280) (1) Data frame handling I0312 22:18:32.401437 6 log.go:172] (0xc0021ee280) (1) Data frame sent I0312 22:18:32.401470 6 log.go:172] (0xc001a302c0) (0xc0021ee280) Stream removed, broadcasting: 1 I0312 22:18:32.401488 6 log.go:172] (0xc001a302c0) Go away received I0312 22:18:32.401553 6 log.go:172] (0xc001a302c0) (0xc0021ee280) Stream removed, broadcasting: 1 I0312 22:18:32.401568 6 log.go:172] (0xc001a302c0) (0xc001db7ae0) Stream removed, broadcasting: 3 I0312 22:18:32.401578 6 log.go:172] (0xc001a302c0) (0xc000adbe00) Stream removed, broadcasting: 5 Mar 12 22:18:32.401: INFO: Exec stderr: "" Mar 12 22:18:32.401: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-316 
PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:32.401: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:32.425991 6 log.go:172] (0xc001eeab00) (0xc001374140) Create stream I0312 22:18:32.426008 6 log.go:172] (0xc001eeab00) (0xc001374140) Stream added, broadcasting: 1 I0312 22:18:32.432876 6 log.go:172] (0xc001eeab00) Reply frame received for 1 I0312 22:18:32.432926 6 log.go:172] (0xc001eeab00) (0xc0021ee320) Create stream I0312 22:18:32.432938 6 log.go:172] (0xc001eeab00) (0xc0021ee320) Stream added, broadcasting: 3 I0312 22:18:32.434036 6 log.go:172] (0xc001eeab00) Reply frame received for 3 I0312 22:18:32.434069 6 log.go:172] (0xc001eeab00) (0xc0022fc5a0) Create stream I0312 22:18:32.434078 6 log.go:172] (0xc001eeab00) (0xc0022fc5a0) Stream added, broadcasting: 5 I0312 22:18:32.434853 6 log.go:172] (0xc001eeab00) Reply frame received for 5 I0312 22:18:32.492509 6 log.go:172] (0xc001eeab00) Data frame received for 3 I0312 22:18:32.492534 6 log.go:172] (0xc0021ee320) (3) Data frame handling I0312 22:18:32.492543 6 log.go:172] (0xc0021ee320) (3) Data frame sent I0312 22:18:32.492556 6 log.go:172] (0xc001eeab00) Data frame received for 5 I0312 22:18:32.492563 6 log.go:172] (0xc0022fc5a0) (5) Data frame handling I0312 22:18:32.492603 6 log.go:172] (0xc001eeab00) Data frame received for 3 I0312 22:18:32.492624 6 log.go:172] (0xc0021ee320) (3) Data frame handling I0312 22:18:32.494040 6 log.go:172] (0xc001eeab00) Data frame received for 1 I0312 22:18:32.494051 6 log.go:172] (0xc001374140) (1) Data frame handling I0312 22:18:32.494063 6 log.go:172] (0xc001374140) (1) Data frame sent I0312 22:18:32.494072 6 log.go:172] (0xc001eeab00) (0xc001374140) Stream removed, broadcasting: 1 I0312 22:18:32.494221 6 log.go:172] (0xc001eeab00) (0xc001374140) Stream removed, broadcasting: 1 I0312 22:18:32.494236 6 log.go:172] (0xc001eeab00) (0xc0021ee320) Stream removed, broadcasting: 3 I0312 22:18:32.494265 6 log.go:172] (0xc001eeab00) Go away received I0312 22:18:32.494411 6 log.go:172] (0xc001eeab00) (0xc0022fc5a0) Stream removed, broadcasting: 5 Mar 12 22:18:32.494: INFO: Exec stderr: "" Mar 12 22:18:32.494: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-316 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 22:18:32.494: INFO: >>> kubeConfig: /root/.kube/config I0312 22:18:32.514672 6 log.go:172] (0xc001eeb130) (0xc001374780) Create stream I0312 22:18:32.514696 6 log.go:172] (0xc001eeb130) (0xc001374780) Stream added, broadcasting: 1 I0312 22:18:32.516280 6 log.go:172] (0xc001eeb130) Reply frame received for 1 I0312 22:18:32.516304 6 log.go:172] (0xc001eeb130) (0xc0022fc640) Create stream I0312 22:18:32.516313 6 log.go:172] (0xc001eeb130) (0xc0022fc640) Stream added, broadcasting: 3 I0312 22:18:32.517018 6 log.go:172] (0xc001eeb130) Reply frame received for 3 I0312 22:18:32.517037 6 log.go:172] (0xc001eeb130) (0xc0022fc6e0) Create stream I0312 22:18:32.517042 6 log.go:172] (0xc001eeb130) (0xc0022fc6e0) Stream added, broadcasting: 5 I0312 22:18:32.517803 6 log.go:172] (0xc001eeb130) Reply frame received for 5 I0312 22:18:32.588938 6 log.go:172] (0xc001eeb130) Data frame received for 3 I0312 22:18:32.588961 6 log.go:172] (0xc0022fc640) (3) Data frame handling I0312 22:18:32.588976 6 log.go:172] (0xc0022fc640) (3) Data frame sent I0312 22:18:32.588986 6 log.go:172] (0xc001eeb130) Data frame 
received for 3 I0312 22:18:32.588995 6 log.go:172] (0xc0022fc640) (3) Data frame handling I0312 22:18:32.589006 6 log.go:172] (0xc001eeb130) Data frame received for 5 I0312 22:18:32.589039 6 log.go:172] (0xc0022fc6e0) (5) Data frame handling I0312 22:18:32.590411 6 log.go:172] (0xc001eeb130) Data frame received for 1 I0312 22:18:32.590432 6 log.go:172] (0xc001374780) (1) Data frame handling I0312 22:18:32.590447 6 log.go:172] (0xc001374780) (1) Data frame sent I0312 22:18:32.590483 6 log.go:172] (0xc001eeb130) (0xc001374780) Stream removed, broadcasting: 1 I0312 22:18:32.590531 6 log.go:172] (0xc001eeb130) Go away received I0312 22:18:32.590631 6 log.go:172] (0xc001eeb130) (0xc001374780) Stream removed, broadcasting: 1 I0312 22:18:32.590652 6 log.go:172] (0xc001eeb130) (0xc0022fc640) Stream removed, broadcasting: 3 I0312 22:18:32.590675 6 log.go:172] (0xc001eeb130) (0xc0022fc6e0) Stream removed, broadcasting: 5 Mar 12 22:18:32.590: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:32.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-316" for this suite. • [SLOW TEST:9.004 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4273,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:32.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7431 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7431 I0312 22:18:32.726949 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7431, replica count: 2 I0312 22:18:35.777403 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 12 22:18:35.777: INFO: Creating new exec pod Mar 12 22:18:38.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7431 execpodbs6nr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' 
Mar 12 22:18:39.043: INFO: stderr: "I0312 22:18:38.958387 4318 log.go:172] (0xc000bb6000) (0xc0009e8000) Create stream\nI0312 22:18:38.958449 4318 log.go:172] (0xc000bb6000) (0xc0009e8000) Stream added, broadcasting: 1\nI0312 22:18:38.960719 4318 log.go:172] (0xc000bb6000) Reply frame received for 1\nI0312 22:18:38.960759 4318 log.go:172] (0xc000bb6000) (0xc0009c2000) Create stream\nI0312 22:18:38.960775 4318 log.go:172] (0xc000bb6000) (0xc0009c2000) Stream added, broadcasting: 3\nI0312 22:18:38.961795 4318 log.go:172] (0xc000bb6000) Reply frame received for 3\nI0312 22:18:38.961821 4318 log.go:172] (0xc000bb6000) (0xc0003f5680) Create stream\nI0312 22:18:38.961831 4318 log.go:172] (0xc000bb6000) (0xc0003f5680) Stream added, broadcasting: 5\nI0312 22:18:38.962665 4318 log.go:172] (0xc000bb6000) Reply frame received for 5\nI0312 22:18:39.036697 4318 log.go:172] (0xc000bb6000) Data frame received for 5\nI0312 22:18:39.036729 4318 log.go:172] (0xc0003f5680) (5) Data frame handling\nI0312 22:18:39.036753 4318 log.go:172] (0xc0003f5680) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0312 22:18:39.037649 4318 log.go:172] (0xc000bb6000) Data frame received for 5\nI0312 22:18:39.037670 4318 log.go:172] (0xc0003f5680) (5) Data frame handling\nI0312 22:18:39.037683 4318 log.go:172] (0xc0003f5680) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0312 22:18:39.038209 4318 log.go:172] (0xc000bb6000) Data frame received for 3\nI0312 22:18:39.038237 4318 log.go:172] (0xc0009c2000) (3) Data frame handling\nI0312 22:18:39.038257 4318 log.go:172] (0xc000bb6000) Data frame received for 5\nI0312 22:18:39.038280 4318 log.go:172] (0xc0003f5680) (5) Data frame handling\nI0312 22:18:39.039719 4318 log.go:172] (0xc000bb6000) Data frame received for 1\nI0312 22:18:39.039734 4318 log.go:172] (0xc0009e8000) (1) Data frame handling\nI0312 22:18:39.039743 4318 log.go:172] (0xc0009e8000) (1) Data frame sent\nI0312 22:18:39.039757 4318 log.go:172] (0xc000bb6000) (0xc0009e8000) Stream removed, broadcasting: 1\nI0312 22:18:39.039774 4318 log.go:172] (0xc000bb6000) Go away received\nI0312 22:18:39.040075 4318 log.go:172] (0xc000bb6000) (0xc0009e8000) Stream removed, broadcasting: 1\nI0312 22:18:39.040089 4318 log.go:172] (0xc000bb6000) (0xc0009c2000) Stream removed, broadcasting: 3\nI0312 22:18:39.040095 4318 log.go:172] (0xc000bb6000) (0xc0003f5680) Stream removed, broadcasting: 5\n" Mar 12 22:18:39.043: INFO: stdout: "" Mar 12 22:18:39.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7431 execpodbs6nr -- /bin/sh -x -c nc -zv -t -w 2 10.105.74.93 80' Mar 12 22:18:39.199: INFO: stderr: "I0312 22:18:39.136728 4338 log.go:172] (0xc000a6b290) (0xc000a44780) Create stream\nI0312 22:18:39.136764 4338 log.go:172] (0xc000a6b290) (0xc000a44780) Stream added, broadcasting: 1\nI0312 22:18:39.140712 4338 log.go:172] (0xc000a6b290) Reply frame received for 1\nI0312 22:18:39.140747 4338 log.go:172] (0xc000a6b290) (0xc000659b80) Create stream\nI0312 22:18:39.140754 4338 log.go:172] (0xc000a6b290) (0xc000659b80) Stream added, broadcasting: 3\nI0312 22:18:39.143388 4338 log.go:172] (0xc000a6b290) Reply frame received for 3\nI0312 22:18:39.143418 4338 log.go:172] (0xc000a6b290) (0xc00061e780) Create stream\nI0312 22:18:39.143425 4338 log.go:172] (0xc000a6b290) (0xc00061e780) Stream added, broadcasting: 5\nI0312 22:18:39.144615 4338 log.go:172] (0xc000a6b290) Reply frame received for 5\nI0312 22:18:39.195440 4338 log.go:172] 
(0xc000a6b290) Data frame received for 5\nI0312 22:18:39.195464 4338 log.go:172] (0xc00061e780) (5) Data frame handling\nI0312 22:18:39.195472 4338 log.go:172] (0xc00061e780) (5) Data frame sent\nI0312 22:18:39.195478 4338 log.go:172] (0xc000a6b290) Data frame received for 5\n+ nc -zv -t -w 2 10.105.74.93 80\nConnection to 10.105.74.93 80 port [tcp/http] succeeded!\nI0312 22:18:39.195498 4338 log.go:172] (0xc000a6b290) Data frame received for 3\nI0312 22:18:39.195534 4338 log.go:172] (0xc000659b80) (3) Data frame handling\nI0312 22:18:39.195552 4338 log.go:172] (0xc00061e780) (5) Data frame handling\nI0312 22:18:39.196201 4338 log.go:172] (0xc000a6b290) Data frame received for 1\nI0312 22:18:39.196213 4338 log.go:172] (0xc000a44780) (1) Data frame handling\nI0312 22:18:39.196223 4338 log.go:172] (0xc000a44780) (1) Data frame sent\nI0312 22:18:39.196267 4338 log.go:172] (0xc000a6b290) (0xc000a44780) Stream removed, broadcasting: 1\nI0312 22:18:39.196366 4338 log.go:172] (0xc000a6b290) Go away received\nI0312 22:18:39.196636 4338 log.go:172] (0xc000a6b290) (0xc000a44780) Stream removed, broadcasting: 1\nI0312 22:18:39.196656 4338 log.go:172] (0xc000a6b290) (0xc000659b80) Stream removed, broadcasting: 3\nI0312 22:18:39.196665 4338 log.go:172] (0xc000a6b290) (0xc00061e780) Stream removed, broadcasting: 5\n" Mar 12 22:18:39.199: INFO: stdout: "" Mar 12 22:18:39.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7431 execpodbs6nr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 31255' Mar 12 22:18:39.363: INFO: stderr: "I0312 22:18:39.302674 4360 log.go:172] (0xc000960000) (0xc000aac0a0) Create stream\nI0312 22:18:39.302709 4360 log.go:172] (0xc000960000) (0xc000aac0a0) Stream added, broadcasting: 1\nI0312 22:18:39.304285 4360 log.go:172] (0xc000960000) Reply frame received for 1\nI0312 22:18:39.304326 4360 log.go:172] (0xc000960000) (0xc0006d3d60) Create stream\nI0312 22:18:39.304335 4360 log.go:172] (0xc000960000) (0xc0006d3d60) Stream added, broadcasting: 3\nI0312 22:18:39.304830 4360 log.go:172] (0xc000960000) Reply frame received for 3\nI0312 22:18:39.304847 4360 log.go:172] (0xc000960000) (0xc000aac140) Create stream\nI0312 22:18:39.304853 4360 log.go:172] (0xc000960000) (0xc000aac140) Stream added, broadcasting: 5\nI0312 22:18:39.305479 4360 log.go:172] (0xc000960000) Reply frame received for 5\nI0312 22:18:39.359259 4360 log.go:172] (0xc000960000) Data frame received for 5\nI0312 22:18:39.359282 4360 log.go:172] (0xc000aac140) (5) Data frame handling\nI0312 22:18:39.359295 4360 log.go:172] (0xc000aac140) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.4 31255\nConnection to 172.17.0.4 31255 port [tcp/31255] succeeded!\nI0312 22:18:39.359390 4360 log.go:172] (0xc000960000) Data frame received for 5\nI0312 22:18:39.359396 4360 log.go:172] (0xc000aac140) (5) Data frame handling\nI0312 22:18:39.359421 4360 log.go:172] (0xc000960000) Data frame received for 3\nI0312 22:18:39.359437 4360 log.go:172] (0xc0006d3d60) (3) Data frame handling\nI0312 22:18:39.360383 4360 log.go:172] (0xc000960000) Data frame received for 1\nI0312 22:18:39.360415 4360 log.go:172] (0xc000aac0a0) (1) Data frame handling\nI0312 22:18:39.360428 4360 log.go:172] (0xc000aac0a0) (1) Data frame sent\nI0312 22:18:39.360442 4360 log.go:172] (0xc000960000) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0312 22:18:39.360457 4360 log.go:172] (0xc000960000) Go away received\nI0312 22:18:39.360703 4360 log.go:172] (0xc000960000) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0312 
22:18:39.360714 4360 log.go:172] (0xc000960000) (0xc0006d3d60) Stream removed, broadcasting: 3\nI0312 22:18:39.360719 4360 log.go:172] (0xc000960000) (0xc000aac140) Stream removed, broadcasting: 5\n" Mar 12 22:18:39.363: INFO: stdout: "" Mar 12 22:18:39.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7431 execpodbs6nr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 31255' Mar 12 22:18:39.530: INFO: stderr: "I0312 22:18:39.454224 4378 log.go:172] (0xc0000f4dc0) (0xc000908000) Create stream\nI0312 22:18:39.454258 4378 log.go:172] (0xc0000f4dc0) (0xc000908000) Stream added, broadcasting: 1\nI0312 22:18:39.455783 4378 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0312 22:18:39.455805 4378 log.go:172] (0xc0000f4dc0) (0xc000699b80) Create stream\nI0312 22:18:39.455812 4378 log.go:172] (0xc0000f4dc0) (0xc000699b80) Stream added, broadcasting: 3\nI0312 22:18:39.456278 4378 log.go:172] (0xc0000f4dc0) Reply frame received for 3\nI0312 22:18:39.456295 4378 log.go:172] (0xc0000f4dc0) (0xc000290000) Create stream\nI0312 22:18:39.456301 4378 log.go:172] (0xc0000f4dc0) (0xc000290000) Stream added, broadcasting: 5\nI0312 22:18:39.456739 4378 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0312 22:18:39.525198 4378 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0312 22:18:39.525221 4378 log.go:172] (0xc000290000) (5) Data frame handling\nI0312 22:18:39.525228 4378 log.go:172] (0xc000290000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.5 31255\nConnection to 172.17.0.5 31255 port [tcp/31255] succeeded!\nI0312 22:18:39.525504 4378 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0312 22:18:39.525526 4378 log.go:172] (0xc000290000) (5) Data frame handling\nI0312 22:18:39.525992 4378 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0312 22:18:39.526014 4378 log.go:172] (0xc000699b80) (3) Data frame handling\nI0312 22:18:39.527137 4378 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0312 22:18:39.527162 4378 log.go:172] (0xc000908000) (1) Data frame handling\nI0312 22:18:39.527184 4378 log.go:172] (0xc000908000) (1) Data frame sent\nI0312 22:18:39.527207 4378 log.go:172] (0xc0000f4dc0) (0xc000908000) Stream removed, broadcasting: 1\nI0312 22:18:39.527223 4378 log.go:172] (0xc0000f4dc0) Go away received\nI0312 22:18:39.527451 4378 log.go:172] (0xc0000f4dc0) (0xc000908000) Stream removed, broadcasting: 1\nI0312 22:18:39.527465 4378 log.go:172] (0xc0000f4dc0) (0xc000699b80) Stream removed, broadcasting: 3\nI0312 22:18:39.527470 4378 log.go:172] (0xc0000f4dc0) (0xc000290000) Stream removed, broadcasting: 5\n" Mar 12 22:18:39.531: INFO: stdout: "" Mar 12 22:18:39.531: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:39.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7431" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.045 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":266,"skipped":4286,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:39.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1993.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1993.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 22:18:43.738: INFO: DNS probes using dns-1993/dns-test-507660ca-423f-49cd-a94f-f9f4b037021f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:43.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1993" for this suite. 
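The dig loops above assert that cluster DNS serves an A record for kubernetes.default.svc.cluster.local over both UDP and TCP. A minimal in-cluster equivalent of that check, assuming only the Go standard library and that the code runs inside a pod using the cluster resolver:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Rough equivalent of: dig +search kubernetes.default.svc.cluster.local A
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatalf("cluster DNS lookup failed: %v", err)
	}
	// Expected to contain the ClusterIP of the kubernetes API service.
	fmt.Println("kubernetes.default resolves to:", addrs)
}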
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":267,"skipped":4306,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:43.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-24b6f150-3c3e-4ebd-8e83-cadf41a18398 STEP: Creating a pod to test consume secrets Mar 12 22:18:43.863: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c29d18ea-b687-4b39-95f5-0cc4ba3f4649" in namespace "projected-5643" to be "success or failure" Mar 12 22:18:43.939: INFO: Pod "pod-projected-secrets-c29d18ea-b687-4b39-95f5-0cc4ba3f4649": Phase="Pending", Reason="", readiness=false. Elapsed: 75.732703ms Mar 12 22:18:45.942: INFO: Pod "pod-projected-secrets-c29d18ea-b687-4b39-95f5-0cc4ba3f4649": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078813298s STEP: Saw pod success Mar 12 22:18:45.942: INFO: Pod "pod-projected-secrets-c29d18ea-b687-4b39-95f5-0cc4ba3f4649" satisfied condition "success or failure" Mar 12 22:18:45.945: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-c29d18ea-b687-4b39-95f5-0cc4ba3f4649 container projected-secret-volume-test: STEP: delete the pod Mar 12 22:18:46.010: INFO: Waiting for pod pod-projected-secrets-c29d18ea-b687-4b39-95f5-0cc4ba3f4649 to disappear Mar 12 22:18:46.013: INFO: Pod pod-projected-secrets-c29d18ea-b687-4b39-95f5-0cc4ba3f4649 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:46.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5643" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4306,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:46.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:18:46.070: INFO: Waiting up to 5m0s for pod "busybox-user-65534-6e60bbb9-99a4-4be0-a858-f352487252f0" in namespace "security-context-test-9423" to be "success or failure" Mar 12 22:18:46.084: INFO: Pod "busybox-user-65534-6e60bbb9-99a4-4be0-a858-f352487252f0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.735177ms Mar 12 22:18:48.088: INFO: Pod "busybox-user-65534-6e60bbb9-99a4-4be0-a858-f352487252f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0187405s Mar 12 22:18:48.088: INFO: Pod "busybox-user-65534-6e60bbb9-99a4-4be0-a858-f352487252f0" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:48.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9423" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:48.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 12 22:18:50.341: INFO: Waiting up to 5m0s for pod "client-envvars-9e933d8d-15e1-4283-bf13-760d6bc076d5" in namespace "pods-9452" to be "success or failure" Mar 12 22:18:50.353: INFO: Pod "client-envvars-9e933d8d-15e1-4283-bf13-760d6bc076d5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.56885ms Mar 12 22:18:52.356: INFO: Pod "client-envvars-9e933d8d-15e1-4283-bf13-760d6bc076d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014832078s STEP: Saw pod success Mar 12 22:18:52.356: INFO: Pod "client-envvars-9e933d8d-15e1-4283-bf13-760d6bc076d5" satisfied condition "success or failure" Mar 12 22:18:52.358: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-9e933d8d-15e1-4283-bf13-760d6bc076d5 container env3cont: STEP: delete the pod Mar 12 22:18:52.377: INFO: Waiting for pod client-envvars-9e933d8d-15e1-4283-bf13-760d6bc076d5 to disappear Mar 12 22:18:52.406: INFO: Pod client-envvars-9e933d8d-15e1-4283-bf13-760d6bc076d5 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:18:52.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9452" for this suite. •{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4374,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:18:52.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 12 22:18:52.492: INFO: >>> kubeConfig: /root/.kube/config Mar 12 22:18:55.437: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:19:06.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-811" for this suite. • [SLOW TEST:13.939 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":271,"skipped":4376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:19:06.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:19:23.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1952" for this suite. • [SLOW TEST:17.129 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":272,"skipped":4401,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:19:23.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 12 22:19:23.529: INFO: Waiting up to 5m0s for pod "downward-api-0420a036-7e9e-4fc3-950e-dd4b32992de6" in namespace "downward-api-7514" to be "success or failure" Mar 12 22:19:23.547: INFO: Pod "downward-api-0420a036-7e9e-4fc3-950e-dd4b32992de6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.234561ms Mar 12 22:19:25.551: INFO: Pod "downward-api-0420a036-7e9e-4fc3-950e-dd4b32992de6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.022027138s STEP: Saw pod success Mar 12 22:19:25.551: INFO: Pod "downward-api-0420a036-7e9e-4fc3-950e-dd4b32992de6" satisfied condition "success or failure" Mar 12 22:19:25.554: INFO: Trying to get logs from node jerma-worker pod downward-api-0420a036-7e9e-4fc3-950e-dd4b32992de6 container dapi-container: STEP: delete the pod Mar 12 22:19:25.605: INFO: Waiting for pod downward-api-0420a036-7e9e-4fc3-950e-dd4b32992de6 to disappear Mar 12 22:19:25.617: INFO: Pod downward-api-0420a036-7e9e-4fc3-950e-dd4b32992de6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:19:25.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7514" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4423,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:19:25.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:19:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6936" for this suite. • [SLOW TEST:7.065 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":274,"skipped":4433,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:19:32.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 12 22:19:32.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5505 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 12 22:19:34.162: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0312 22:19:34.080081 4401 log.go:172] (0xc00001d290) (0xc0006f7c20) Create stream\nI0312 22:19:34.080130 4401 log.go:172] (0xc00001d290) (0xc0006f7c20) Stream added, broadcasting: 1\nI0312 22:19:34.081966 4401 log.go:172] (0xc00001d290) Reply frame received for 1\nI0312 22:19:34.081992 4401 log.go:172] (0xc00001d290) (0xc000a3e0a0) Create stream\nI0312 22:19:34.082000 4401 log.go:172] (0xc00001d290) (0xc000a3e0a0) Stream added, broadcasting: 3\nI0312 22:19:34.082558 4401 log.go:172] (0xc00001d290) Reply frame received for 3\nI0312 22:19:34.082582 4401 log.go:172] (0xc00001d290) (0xc000a3e140) Create stream\nI0312 22:19:34.082588 4401 log.go:172] (0xc00001d290) (0xc000a3e140) Stream added, broadcasting: 5\nI0312 22:19:34.083082 4401 log.go:172] (0xc00001d290) Reply frame received for 5\nI0312 22:19:34.083102 4401 log.go:172] (0xc00001d290) (0xc0006f7cc0) Create stream\nI0312 22:19:34.083109 4401 log.go:172] (0xc00001d290) (0xc0006f7cc0) Stream added, broadcasting: 7\nI0312 22:19:34.083726 4401 log.go:172] (0xc00001d290) Reply frame received for 7\nI0312 22:19:34.083821 4401 log.go:172] (0xc000a3e0a0) (3) Writing data frame\nI0312 22:19:34.083904 4401 log.go:172] (0xc000a3e0a0) (3) Writing data frame\nI0312 22:19:34.085384 4401 log.go:172] (0xc00001d290) Data frame received for 5\nI0312 22:19:34.085400 4401 log.go:172] (0xc000a3e140) (5) Data frame handling\nI0312 22:19:34.085412 4401 log.go:172] (0xc000a3e140) (5) Data frame sent\nI0312 22:19:34.088481 4401 log.go:172] (0xc00001d290) Data frame received for 5\nI0312 22:19:34.088493 4401 log.go:172] (0xc000a3e140) (5) Data frame handling\nI0312 22:19:34.088504 4401 log.go:172] (0xc000a3e140) (5) Data frame sent\nI0312 22:19:34.119288 4401 log.go:172] (0xc00001d290) Data frame received for 5\nI0312 22:19:34.119306 4401 log.go:172] (0xc000a3e140) (5) Data frame handling\nI0312 22:19:34.119485 4401 log.go:172] (0xc00001d290) Data frame received for 7\nI0312 22:19:34.119499 4401 
log.go:172] (0xc0006f7cc0) (7) Data frame handling\nI0312 22:19:34.119932 4401 log.go:172] (0xc00001d290) Data frame received for 1\nI0312 22:19:34.119944 4401 log.go:172] (0xc0006f7c20) (1) Data frame handling\nI0312 22:19:34.119950 4401 log.go:172] (0xc0006f7c20) (1) Data frame sent\nI0312 22:19:34.119979 4401 log.go:172] (0xc00001d290) (0xc000a3e0a0) Stream removed, broadcasting: 3\nI0312 22:19:34.120029 4401 log.go:172] (0xc00001d290) (0xc0006f7c20) Stream removed, broadcasting: 1\nI0312 22:19:34.120052 4401 log.go:172] (0xc00001d290) Go away received\nI0312 22:19:34.120261 4401 log.go:172] (0xc00001d290) (0xc0006f7c20) Stream removed, broadcasting: 1\nI0312 22:19:34.120272 4401 log.go:172] (0xc00001d290) (0xc000a3e0a0) Stream removed, broadcasting: 3\nI0312 22:19:34.120277 4401 log.go:172] (0xc00001d290) (0xc000a3e140) Stream removed, broadcasting: 5\nI0312 22:19:34.120281 4401 log.go:172] (0xc00001d290) (0xc0006f7cc0) Stream removed, broadcasting: 7\n" Mar 12 22:19:34.163: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:19:36.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5505" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":275,"skipped":4440,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:19:36.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-g8qc STEP: Creating a pod to test atomic-volume-subpath Mar 12 22:19:36.254: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-g8qc" in namespace "subpath-1697" to be "success or failure" Mar 12 22:19:36.274: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.250127ms Mar 12 22:19:38.276: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 2.02169466s Mar 12 22:19:40.279: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 4.024492531s Mar 12 22:19:42.282: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 6.027270865s Mar 12 22:19:44.284: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.030150102s Mar 12 22:19:46.288: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 10.033230667s Mar 12 22:19:48.291: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 12.036662558s Mar 12 22:19:50.294: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 14.039836238s Mar 12 22:19:52.297: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 16.042834609s Mar 12 22:19:54.300: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 18.045788363s Mar 12 22:19:56.304: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Running", Reason="", readiness=true. Elapsed: 20.049308447s Mar 12 22:19:58.306: INFO: Pod "pod-subpath-test-configmap-g8qc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051925189s STEP: Saw pod success Mar 12 22:19:58.306: INFO: Pod "pod-subpath-test-configmap-g8qc" satisfied condition "success or failure" Mar 12 22:19:58.308: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-g8qc container test-container-subpath-configmap-g8qc: STEP: delete the pod Mar 12 22:19:58.331: INFO: Waiting for pod pod-subpath-test-configmap-g8qc to disappear Mar 12 22:19:58.336: INFO: Pod pod-subpath-test-configmap-g8qc no longer exists STEP: Deleting pod pod-subpath-test-configmap-g8qc Mar 12 22:19:58.336: INFO: Deleting pod "pod-subpath-test-configmap-g8qc" in namespace "subpath-1697" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:19:58.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1697" for this suite. 
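pod-subpath-test-configmap-g8qc stays Running for roughly twenty seconds above while the suite's repeated content checks execute against the subPath-mounted file. The subPath mount itself reduces to a small manifest; the names and key below are illustrative, and this sketch skips the suite's repeated-read verification:

    kubectl create configmap demo-config --from-literal=config.txt='hello from subpath'
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath
        image: docker.io/library/busybox:1.29
        command: ["cat", "/opt/config.txt"]
        volumeMounts:
        - name: config-volume
          mountPath: /opt/config.txt
          subPath: config.txt
      volumes:
      - name: config-volume
        configMap:
          name: demo-config
    EOF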
• [SLOW TEST:22.168 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":276,"skipped":4458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:19:58.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 12 22:19:58.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 12 22:19:58.580: INFO: stderr: "" Mar 12 22:19:58.580: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:19:58.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1807" for this suite. 
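The api-versions spec is among the simplest in the suite: run kubectl api-versions and assert that the core group's bare v1 appears in the list, which the captured stdout above confirms. The same check from a shell, with an exact-line match so entries like apps/v1 cannot produce a false positive:

    kubectl api-versions | grep -x 'v1'

grep -x exits nonzero when the exact line is absent, which makes the one-liner usable directly in CI scripts.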
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":277,"skipped":4531,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 12 22:19:58.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 12 22:20:01.204: INFO: Successfully updated pod "labelsupdate17cb537d-df58-4e6d-a3fb-6b0d60b5eec5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 12 22:20:05.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6412" for this suite. • [SLOW TEST:6.659 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4533,"failed":0} SSSMar 12 22:20:05.247: INFO: Running AfterSuite actions on all nodes Mar 12 22:20:05.247: INFO: Running AfterSuite actions on node 1 Mar 12 22:20:05.247: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0} Ran 278 of 4814 Specs in 4287.805 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped PASS